Prerequisite
A QuickBooks object you would like to read from.
Supported entities for source:
| Entity | Can be filtered | Supports limit | Supports Order by | Supports Select * | Supports partitioning |
|---|---|---|---|---|---|
| Account | Yes | Yes | Yes | Yes | Yes |
| Bill | Yes | Yes | Yes | Yes | Yes |
| Company Info | No | No | No | Yes | No |
| Customer | Yes | Yes | Yes | Yes | Yes |
| Employee | Yes | Yes | Yes | Yes | Yes |
| Estimate | Yes | Yes | Yes | Yes | Yes |
| Invoice | Yes | Yes | Yes | Yes | Yes |
| Item | Yes | Yes | Yes | Yes | Yes |
| Payment | Yes | Yes | Yes | Yes | Yes |
| Preferences | No | No | No | Yes | No |
| Profit and Loss | Yes | No | No | Yes | No |
| Tax Agency | Yes | Yes | Yes | Yes | Yes |
| Vendors | Yes | Yes | Yes | Yes | Yes |
Example:
QuickBooks_read = glueContext.create_dynamic_frame.from_options(
    connection_type="quickbooks",
    connection_options={
        "connectionName": "connectionName",
        "ENTITY_NAME": "Account",
        "API_VERSION": "v3"
    }
)
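This call returns an AWS Glue DynamicFrame. As a quick sanity check, you can print the inferred schema or convert the frame to a Spark DataFrame to preview a few rows (a minimal sketch, assuming the QuickBooks_read frame from the example above):

# Print the schema AWS Glue inferred for the Account entity
QuickBooks_read.printSchema()

# Convert to a Spark DataFrame and preview the first few records
QuickBooks_read.toDF().show(5, truncate=False)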
QuickBooks entity and field details:
For more information about the supported entities and their fields, see the QuickBooks API documentation.
Partitioning queries
Field-based partitioning:
In QuickBooks, the Integer and DateTime datatype fields support field-based partitioning.
You can provide the additional Spark options PARTITION_FIELD, LOWER_BOUND, UPPER_BOUND, and NUM_PARTITIONS if you want to utilize concurrency in Spark. With these parameters, the original query is split into NUM_PARTITIONS sub-queries that Spark tasks can execute concurrently.

- PARTITION_FIELD: the name of the field to be used to partition the query.
- LOWER_BOUND: an inclusive lower bound value of the chosen partition field. For the DateTime field, we accept the Spark timestamp format used in Spark SQL queries. Example of a valid value: "2024-05-07T02:03:00.00Z"
- UPPER_BOUND: an exclusive upper bound value of the chosen partition field.
- NUM_PARTITIONS: the number of partitions.
Example:
QuickBooks_read = glueContext.create_dynamic_frame.from_options(
    connection_type="quickbooks",
    connection_options={
        "connectionName": "connectionName",
        "REALMID": "12345678690123456789",
        "ENTITY_NAME": "Account",
        "API_VERSION": "v3",
        "PARTITION_FIELD": "MetaData_CreateTime",
        "LOWER_BOUND": "2023-09-07T02:03:00.000Z",
        "UPPER_BOUND": "2024-05-07T02:03:00.000Z",
        "NUM_PARTITIONS": "10"
    }
)
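Assuming the bounds are split evenly into equal sub-ranges (the typical behavior for bound-based partitioning; the connector's exact boundary arithmetic is not documented here), the example above produces 10 sub-queries over MetaData_CreateTime, each covering roughly 24 days. A minimal sketch of that arithmetic:

from datetime import datetime

# Illustrative only: an even split of the partition range into NUM_PARTITIONS sub-ranges.
# The connector's actual boundary calculation may differ.
lower = datetime.fromisoformat("2023-09-07T02:03:00+00:00")
upper = datetime.fromisoformat("2024-05-07T02:03:00+00:00")
num_partitions = 10

stride = (upper - lower) / num_partitions
for i in range(num_partitions):
    start = lower + i * stride
    end = lower + (i + 1) * stride
    print(f"partition {i}: [{start}, {end})")  # lower bound inclusive, upper bound exclusive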
Record-based partitioning:
The original query is split into NUM_PARTITIONS sub-queries that Spark tasks can execute concurrently.

- NUM_PARTITIONS: the number of partitions.
Example:
QuickBooks_read = glueContext.create_dynamic_frame.from_options(
    connection_type="quickbooks",
    connection_options={
        "connectionName": "connectionName",
        "REALMID": "1234567890123456789",
        "ENTITY_NAME": "Bill",
        "API_VERSION": "v3",
        "NUM_PARTITIONS": "10"
    }
)
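The snippets on this page assume that glueContext already exists. In a runnable AWS Glue ETL script, the standard job boilerplate below creates it; the connection name, realm ID, and entity values are the placeholders from the example above.

import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the Bill entity with record-based partitioning, as in the example above
QuickBooks_read = glueContext.create_dynamic_frame.from_options(
    connection_type="quickbooks",
    connection_options={
        "connectionName": "connectionName",
        "REALMID": "1234567890123456789",
        "ENTITY_NAME": "Bill",
        "API_VERSION": "v3",
        "NUM_PARTITIONS": "10"
    }
)

job.commit()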