
Reading from QuickBooks entities


Prerequisite

A QuickBooks object you would like to read from.

Supported entities for source:

Entity            Can be filtered   Supports limit   Supports Order by   Supports Select *   Supports partitioning
Account           Yes               Yes              Yes                 Yes                 Yes
Bill              Yes               Yes              Yes                 Yes                 Yes
Company Info      No                No               No                  Yes                 No
Customer          Yes               Yes              Yes                 Yes                 Yes
Employee          Yes               Yes              Yes                 Yes                 Yes
Estimate          Yes               Yes              Yes                 Yes                 Yes
Invoice           Yes               Yes              Yes                 Yes                 Yes
Item              Yes               Yes              Yes                 Yes                 Yes
Payment           Yes               Yes              Yes                 Yes                 Yes
Preferences       No                No               No                  Yes                 No
Profit and Loss   Yes               No               No                  Yes                 No
Tax Agency        Yes               Yes              Yes                 Yes                 Yes
Vendors           Yes               Yes              Yes                 Yes                 Yes
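The table above can be encoded as a small lookup for pre-flight checks. The snippet below is purely illustrative: the dictionary and the supports() helper are hypothetical names, not part of the AWS Glue API; the capability values are transcribed from the table.

```python
# Illustrative only: a lookup built from the capability table above.
# Entities not listed explicitly support all five capabilities per the table.
SOURCE_ENTITY_CAPABILITIES = {
    "Company Info":    {"filter": False, "limit": False, "order_by": False, "select_star": True, "partitioning": False},
    "Preferences":     {"filter": False, "limit": False, "order_by": False, "select_star": True, "partitioning": False},
    "Profit and Loss": {"filter": True,  "limit": False, "order_by": False, "select_star": True, "partitioning": False},
}

# All remaining entities (Account, Bill, Customer, Employee, Estimate,
# Invoice, Item, Payment, Tax Agency, Vendors) support everything.
_ALL_SUPPORTED = {"filter": True, "limit": True, "order_by": True,
                  "select_star": True, "partitioning": True}

def supports(entity: str, capability: str) -> bool:
    """Return True if the table lists 'Yes' for this entity/capability."""
    return SOURCE_ENTITY_CAPABILITIES.get(entity, _ALL_SUPPORTED)[capability]
```

Such a check can catch, for example, an attempt to partition a Preferences read before the job is submitted.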

Example:

QuickBooks_read = glueContext.create_dynamic_frame.from_options(
    connection_type="quickbooks",
    connection_options={
        "connectionName": "connectionName",
        "ENTITY_NAME": "Account",
        "API_VERSION": "v3"
    }
)

QuickBooks entity and field details:

For more information about the entities and field details, see:

Partitioning queries

Field-based partitioning:

In QuickBooks, the Integer and DateTime datatype fields support field-based partitioning.

You can provide the additional Spark options PARTITION_FIELD, LOWER_BOUND, UPPER_BOUND, and NUM_PARTITIONS if you want to utilize concurrency in Spark. With these parameters, the original query is split into NUM_PARTITIONS sub-queries that Spark tasks can execute concurrently.

  • PARTITION_FIELD: the name of the field to be used to partition the query.

  • LOWER_BOUND: an inclusive lower bound value of the chosen partition field.

    For DateTime fields, the Spark timestamp format used in Spark SQL queries is accepted.

    Example of a valid value:

    "2024-05-07T02:03:00.00Z"
  • UPPER_BOUND: an exclusive upper bound value of the chosen partition field.

  • NUM_PARTITIONS: the number of partitions.
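To make the bound semantics concrete, here is a minimal, hypothetical sketch (not connector internals) of how a DateTime range could be split into NUM_PARTITIONS contiguous sub-ranges, each with an inclusive lower and exclusive upper bound as described above:

```python
from datetime import datetime

def split_partition_range(lower_bound: str, upper_bound: str, num_partitions: int):
    """Split [lower_bound, upper_bound) into num_partitions contiguous
    (inclusive-lower, exclusive-upper) sub-ranges of roughly equal width."""
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    lo = datetime.strptime(lower_bound, fmt)
    hi = datetime.strptime(upper_bound, fmt)
    step = (hi - lo) / num_partitions
    # Interior edges are computed from the step; the final edge is pinned
    # to the exact upper bound so no records past it are ever included.
    edges = [lo + step * i for i in range(num_partitions)] + [hi]
    return list(zip(edges, edges[1:]))
```

Each (start, end) pair corresponds to one sub-query that a Spark task can execute independently.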

Example:

QuickBooks_read = glueContext.create_dynamic_frame.from_options(
    connection_type="quickbooks",
    connection_options={
        "connectionName": "connectionName",
        "REALMID": "12345678690123456789",
        "ENTITY_NAME": "Account",
        "API_VERSION": "v3",
        "PARTITION_FIELD": "MetaData_CreateTime",
        "LOWER_BOUND": "2023-09-07T02:03:00.000Z",
        "UPPER_BOUND": "2024-05-07T02:03:00.000Z",
        "NUM_PARTITIONS": "10"
    }
)

Record-based partitioning:

The original query is split into NUM_PARTITIONS sub-queries that Spark tasks can execute concurrently:

  • NUM_PARTITIONS: the number of partitions.

Example:

QuickBooks_read = glueContext.create_dynamic_frame.from_options(
    connection_type="quickbooks",
    connection_options={
        "connectionName": "connectionName",
        "REALMID": "1234567890123456789",
        "ENTITY_NAME": "Bill",
        "API_VERSION": "v3",
        "NUM_PARTITIONS": "10"
    }
)
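As a rough illustration of what record-based partitioning implies (this is a hypothetical sketch, not connector internals), splitting a known record count into NUM_PARTITIONS contiguous index ranges could look like:

```python
def split_records(total_records: int, num_partitions: int):
    """Divide total_records into num_partitions contiguous (start, end)
    index ranges, end-exclusive, spreading any remainder over the first
    partitions so sizes differ by at most one."""
    base, extra = divmod(total_records, num_partitions)
    ranges, start = [], 0
    for i in range(num_partitions):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges
```

Each range then maps to one sub-query that a Spark task can fetch independently.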
© 2025, Amazon Web Services, Inc. or its affiliates. All rights reserved.