Getting Started with the AWS Glue Data Catalog

The AWS Glue Data Catalog is your persistent technical metadata store. It is a managed service that you can use to store, annotate, and share metadata in the AWS Cloud. For more information, see AWS Glue Data Catalog.


You can use this tutorial to create your first AWS Glue Data Catalog, which uses an Amazon S3 bucket as your data source.

In this tutorial, you'll do the following:

  1. Create a database

  2. Create a table

After completing these steps, you will have successfully used an Amazon S3 bucket as the data source to populate the AWS Glue Data Catalog.

Step 1: Create a Database

To get started, sign in to the AWS Management Console and open the AWS Glue console.

  1. In the AWS Glue console, choose Databases from the left-hand menu.

  2. Choose Add database.

    The screenshot shows the Databases list view and the Add database button in the AWS Glue console.
  3. Enter a name for the database. For this tutorial, name the database 'My First Database'. Under Description and location, you can optionally add a database description and location.

    The screenshot shows the Database name field on the Add database page in the AWS Glue console.

Congratulations, you've just set up your first database using the AWS Glue console. Your new database will appear in the list of available databases. You can edit the database by choosing the database's name from the Databases dashboard.

Next Steps

In the next section, you'll create a table and add that table to your database.

You can also explore the settings and permissions for your Data Catalog. See Working with Data Catalog Settings in the AWS Glue Console.

You just created a database using the AWS Glue console, but there are other ways to create a database:

  • You can use crawlers to create a database and tables for you automatically. To set up a database using crawlers, see Working with Crawlers in the AWS Glue Console.

  • You can use AWS CloudFormation templates. See Creating AWS Glue Resources Using AWS Glue Data Catalog Templates.

  • You can also create a database using the AWS Glue Database API operations.

    To create a database with the CreateDatabase operation, structure the request to include the required DatabaseInput parameter.

    For example, using the AWS CLI:

    aws glue create-database --database-input "{\"Name\":\"clidb\"}"

    Or using Boto3:

    import boto3

    glue_client = boto3.client('glue')
    response = glue_client.create_database(
        DatabaseInput={
            'Name': 'boto3db'
        }
    )

For more information about the Database API data types, structure, and operations, see Database API.
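As an illustrative sketch only (the helper function below is not part of the Glue API), you can assemble the DatabaseInput structure before calling create_database. The 255-character name limit mirrors the Data Catalog's documented constraint, and Glue stores database names in lowercase:

```python
def build_database_input(name, description=None, location_uri=None):
    """Assemble the DatabaseInput dict passed to glue.create_database.

    Glue stores database names in lowercase, so normalize up front;
    names are limited to 255 characters.
    """
    if not name or len(name) > 255:
        raise ValueError("database name must be 1-255 characters")
    database_input = {"Name": name.lower()}
    if description is not None:
        database_input["Description"] = description
    if location_uri is not None:
        database_input["LocationUri"] = location_uri
    return database_input

# With AWS credentials configured, the actual call would be:
#   import boto3
#   boto3.client("glue").create_database(
#       DatabaseInput=build_database_input("boto3db", description="tutorial db"))
```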

Step 2: Create a Table

In this step, you create a table using the AWS Glue console.

  1. In the AWS Glue console, choose Tables in the left-hand menu.

  2. Choose Add tables. From the drop-down menu, choose Add tables manually.

  3. In the Add table wizard, set up your table's properties by entering a name for your table.

  4. In the Database section, choose the database that you created in Step 1 ('My First Database') from the drop-down menu. Choose Next.

  5. In Add a data store, choose S3 as the type of source.

  6. In the Data is located in section, choose Specified path in another account.

    1. Copy and paste the path to your source data into the Include path field. This tutorial uses s3://crawler-public-us-west-2/flight/2016/csv.


    2. Choose Next.

  7. For Classification, choose CSV, and for Delimiter, choose Comma (,). Choose Next.

  8. You are asked to define a schema. A schema defines the structure and format of a data record. Choose Add column. (For more information, see Schema registries.)

  9. Specify the column properties:

    1. Enter a column name.

    2. For Column type, 'string' is already selected by default.

    3. For Column number, '1' is already selected by default.

    4. Choose Add.

  10. You are asked to add partition indexes. This is optional. To skip this step, choose Next.

  11. A summary of the table properties is displayed. If everything looks as expected, choose Finish. Otherwise, choose Back and make edits as needed.

Congratulations, you've successfully created a table manually and associated it to a database. Your newly created table will appear in the Tables dashboard. From the dashboard, you can modify and manage all your tables.

For more information, see Working with Tables in the AWS Glue Console.
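The column you added in the wizard corresponds to one entry in the Columns list inside a table's StorageDescriptor. A minimal sketch of that mapping (the helper function and the column name are illustrative, not part of the Glue API):

```python
def columns_from_schema(schema):
    """Turn (name, type) pairs into the Columns list used inside a
    Glue TableInput's StorageDescriptor."""
    return [{"Name": name, "Type": col_type} for name, col_type in schema]

# The single default column from the wizard (string type, position 1):
wizard_columns = columns_from_schema([("col1", "string")])
```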

Next steps

Now that the Data Catalog is populated, you can begin authoring jobs in AWS Glue. See Authoring Jobs.

In addition to using the console, there are other ways to define tables in the Data Catalog, including:

  • Creating and running a crawler

  • Using the AWS Glue Table API

  • Using the AWS Glue Data Catalog template

  • Migrating an Apache Hive metastore

  • Using the AWS CLI, Boto3, or data definition language (DDL)

    The following are examples of how you can use the CLI, Boto3, or DDL to define a table based on the same flights_data.csv file from the S3 bucket that you used in the tutorial.

    Using the AWS CLI:

    {
        "Name": "flights_data_cli",
        "StorageDescriptor": {
            "Columns": [
                {"Name": "year", "Type": "bigint"},
                {"Name": "quarter", "Type": "bigint"}
            ],
            "Location": "s3://crawler-public-us-west-2/flight/2016/csv",
            "InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
            "OutputFormat": "",
            "Compressed": false,
            "NumberOfBuckets": -1,
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe",
                "Parameters": {
                    "field.delim": ",",
                    "serialization.format": ","
                }
            }
        },
        "PartitionKeys": [{"Name": "mon", "Type": "string"}],
        "TableType": "EXTERNAL_TABLE",
        "Parameters": {
            "EXTERNAL": "TRUE",
            "classification": "csv",
            "columnsOrdered": "true",
            "compressionType": "none",
            "delimiter": ",",
            "skip.header.line.count": "1",
            "typeOfData": "file"
        }
    }
    Using Boto3:

    import boto3

    glue_client = boto3.client("glue")
    response = glue_client.create_table(
        DatabaseName='sampledb',
        TableInput={
            'Name': 'flights_data_manual',
            'StorageDescriptor': {
                'Columns': [
                    {'Name': 'year', 'Type': 'bigint'},
                    {'Name': 'quarter', 'Type': 'bigint'}
                ],
                'Location': 's3://crawler-public-us-west-2/flight/2016/csv',
                'InputFormat': 'org.apache.hadoop.mapred.TextInputFormat',
                'OutputFormat': '',
                'Compressed': False,
                'NumberOfBuckets': -1,
                'SerdeInfo': {
                    'SerializationLibrary': 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe',
                    'Parameters': {
                        'field.delim': ',',
                        'serialization.format': ','
                    }
                }
            },
            'PartitionKeys': [{'Name': 'mon', 'Type': 'string'}],
            'TableType': 'EXTERNAL_TABLE',
            'Parameters': {
                'EXTERNAL': 'TRUE',
                'classification': 'csv',
                'columnsOrdered': 'true',
                'compressionType': 'none',
                'delimiter': ',',
                'skip.header.line.count': '1',
                'typeOfData': 'file'
            }
        }
    )
    Using DDL:

    CREATE EXTERNAL TABLE `sampledb`.`flights_data` (
        `year` bigint,
        `quarter` bigint)
    PARTITIONED BY (`mon` string)
    ROW FORMAT DELIMITED
        FIELDS TERMINATED BY ','
    STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
    OUTPUTFORMAT ''
    LOCATION 's3://crawler-public-us-west-2/flight/2016/csv/'
    TBLPROPERTIES (
        'classification'='csv',
        'columnsOrdered'='true',
        'compressionType'='none',
        'delimiter'=',',
        'skip.header.line.count'='1',
        'typeOfData'='file')
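Note that the CSV delimiter appears twice in each definition: once in the SerDe parameters (field.delim) and once in the table Parameters (delimiter). A small local sanity check, run against a trimmed copy of the JSON definition above, shows how to confirm the two stay in sync before submitting the table:

```python
import json

# Trimmed from the CLI table definition above; only the fields
# relevant to the delimiter check are kept.
table_def = json.loads("""
{
  "Name": "flights_data_cli",
  "StorageDescriptor": {
    "SerdeInfo": {
      "SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe",
      "Parameters": {"field.delim": ",", "serialization.format": ","}
    }
  },
  "Parameters": {"classification": "csv", "delimiter": ","}
}
""")

serde_delim = table_def["StorageDescriptor"]["SerdeInfo"]["Parameters"]["field.delim"]
table_delim = table_def["Parameters"]["delimiter"]
assert serde_delim == table_delim, "mismatched delimiters would misparse rows"
```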