Class: Aws::DynamoDB::Client
- Inherits: Seahorse::Client::Base
  - Object
  - Seahorse::Client::Base
  - Aws::DynamoDB::Client
- Includes: ClientStubs
- Defined in:
  - gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb
  - gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/customizations/client.rb
Overview
An API client for DynamoDB. To construct a client, you need to configure a :region and :credentials.
client = Aws::DynamoDB::Client.new(
region: region_name,
credentials: credentials,
# ...
)
For details on configuring region and credentials see the developer guide.
See #initialize for a full list of supported configuration options.
Instance Attribute Summary
Attributes inherited from Seahorse::Client::Base
API Operations
-
#batch_execute_statement(params = {}) ⇒ Types::BatchExecuteStatementOutput
This operation allows you to perform batch reads or writes on data stored in DynamoDB, using PartiQL.
-
#batch_get_item(params = {}) ⇒ Types::BatchGetItemOutput
The BatchGetItem operation returns the attributes of one or more items from one or more tables.
-
#batch_write_item(params = {}) ⇒ Types::BatchWriteItemOutput
The BatchWriteItem operation puts or deletes multiple items in one or more tables.
-
#create_backup(params = {}) ⇒ Types::CreateBackupOutput
Creates a backup for an existing table.
-
#create_global_table(params = {}) ⇒ Types::CreateGlobalTableOutput
Creates a global table from an existing table.
-
#create_table(params = {}) ⇒ Types::CreateTableOutput
The CreateTable operation adds a new table to your account.
-
#delete_backup(params = {}) ⇒ Types::DeleteBackupOutput
Deletes an existing backup of a table.
-
#delete_item(params = {}) ⇒ Types::DeleteItemOutput
Deletes a single item in a table by primary key.
-
#delete_resource_policy(params = {}) ⇒ Types::DeleteResourcePolicyOutput
Deletes the resource-based policy attached to the resource, which can be a table or stream.
-
#delete_table(params = {}) ⇒ Types::DeleteTableOutput
The DeleteTable operation deletes a table and all of its items.
-
#describe_backup(params = {}) ⇒ Types::DescribeBackupOutput
Describes an existing backup of a table.
-
#describe_continuous_backups(params = {}) ⇒ Types::DescribeContinuousBackupsOutput
Checks the status of continuous backups and point in time recovery on the specified table.
-
#describe_contributor_insights(params = {}) ⇒ Types::DescribeContributorInsightsOutput
Returns information about contributor insights for a given table or global secondary index.
-
#describe_endpoints(params = {}) ⇒ Types::DescribeEndpointsResponse
Returns the regional endpoint information.
-
#describe_export(params = {}) ⇒ Types::DescribeExportOutput
Describes an existing table export.
-
#describe_global_table(params = {}) ⇒ Types::DescribeGlobalTableOutput
Returns information about the specified global table.
-
#describe_global_table_settings(params = {}) ⇒ Types::DescribeGlobalTableSettingsOutput
Describes Region-specific settings for a global table.
-
#describe_import(params = {}) ⇒ Types::DescribeImportOutput
Represents the properties of the import.
-
#describe_kinesis_streaming_destination(params = {}) ⇒ Types::DescribeKinesisStreamingDestinationOutput
Returns information about the status of Kinesis streaming.
-
#describe_limits(params = {}) ⇒ Types::DescribeLimitsOutput
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there.
-
#describe_table(params = {}) ⇒ Types::DescribeTableOutput
Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.
-
#describe_table_replica_auto_scaling(params = {}) ⇒ Types::DescribeTableReplicaAutoScalingOutput
Describes auto scaling settings across replicas of the global table at once.
-
#describe_time_to_live(params = {}) ⇒ Types::DescribeTimeToLiveOutput
Gives a description of the Time to Live (TTL) status on the specified table.
-
#disable_kinesis_streaming_destination(params = {}) ⇒ Types::KinesisStreamingDestinationOutput
Stops replication from the DynamoDB table to the Kinesis data stream.
-
#enable_kinesis_streaming_destination(params = {}) ⇒ Types::KinesisStreamingDestinationOutput
Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable workflow.
-
#execute_statement(params = {}) ⇒ Types::ExecuteStatementOutput
This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL.
-
#execute_transaction(params = {}) ⇒ Types::ExecuteTransactionOutput
This operation allows you to perform transactional reads or writes on data stored in DynamoDB, using PartiQL.
-
#export_table_to_point_in_time(params = {}) ⇒ Types::ExportTableToPointInTimeOutput
Exports table data to an S3 bucket.
-
#get_item(params = {}) ⇒ Types::GetItemOutput
The GetItem operation returns a set of attributes for the item with the given primary key.
-
#get_resource_policy(params = {}) ⇒ Types::GetResourcePolicyOutput
Returns the resource-based policy document attached to the resource, which can be a table or stream, in JSON format.
-
#import_table(params = {}) ⇒ Types::ImportTableOutput
Imports table data from an S3 bucket.
-
#list_backups(params = {}) ⇒ Types::ListBackupsOutput
List DynamoDB backups that are associated with an Amazon Web Services account and weren't made with Amazon Web Services Backup.
-
#list_contributor_insights(params = {}) ⇒ Types::ListContributorInsightsOutput
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
-
#list_exports(params = {}) ⇒ Types::ListExportsOutput
Lists completed exports within the past 90 days.
-
#list_global_tables(params = {}) ⇒ Types::ListGlobalTablesOutput
Lists all global tables that have a replica in the specified Region.
-
#list_imports(params = {}) ⇒ Types::ListImportsOutput
Lists completed imports within the past 90 days.
-
#list_tables(params = {}) ⇒ Types::ListTablesOutput
Returns an array of table names associated with the current account and endpoint.
-
#list_tags_of_resource(params = {}) ⇒ Types::ListTagsOfResourceOutput
List all tags on an Amazon DynamoDB resource.
-
#put_item(params = {}) ⇒ Types::PutItemOutput
Creates a new item, or replaces an old item with a new item.
-
#put_resource_policy(params = {}) ⇒ Types::PutResourcePolicyOutput
Attaches a resource-based policy document to the resource, which can be a table or stream.
-
#query(params = {}) ⇒ Types::QueryOutput
You must provide the name of the partition key attribute and a single value for that attribute.
-
#restore_table_from_backup(params = {}) ⇒ Types::RestoreTableFromBackupOutput
Creates a new table from an existing backup.
-
#restore_table_to_point_in_time(params = {}) ⇒ Types::RestoreTableToPointInTimeOutput
Restores the specified table to the specified point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
-
#scan(params = {}) ⇒ Types::ScanOutput
The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index.
-
#tag_resource(params = {}) ⇒ Struct
Associate a set of tags with an Amazon DynamoDB resource.
-
#transact_get_items(params = {}) ⇒ Types::TransactGetItemsOutput
TransactGetItems is a synchronous operation that atomically retrieves multiple items from one or more tables (but not from indexes) in a single account and Region.
-
#transact_write_items(params = {}) ⇒ Types::TransactWriteItemsOutput
TransactWriteItems is a synchronous write operation that groups up to 100 action requests.
-
#untag_resource(params = {}) ⇒ Struct
Removes the association of tags from an Amazon DynamoDB resource.
-
#update_continuous_backups(params = {}) ⇒ Types::UpdateContinuousBackupsOutput
UpdateContinuousBackups enables or disables point in time recovery for the specified table.
-
#update_contributor_insights(params = {}) ⇒ Types::UpdateContributorInsightsOutput
Updates the status for contributor insights for a specific table or index.
-
#update_global_table(params = {}) ⇒ Types::UpdateGlobalTableOutput
Adds or removes replicas in the specified global table.
-
#update_global_table_settings(params = {}) ⇒ Types::UpdateGlobalTableSettingsOutput
Updates settings for a global table.
-
#update_item(params = {}) ⇒ Types::UpdateItemOutput
Edits an existing item's attributes, or adds a new item to the table if it does not already exist.
-
#update_kinesis_streaming_destination(params = {}) ⇒ Types::UpdateKinesisStreamingDestinationOutput
The command to update the Kinesis stream destination.
-
#update_table(params = {}) ⇒ Types::UpdateTableOutput
Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table.
-
#update_table_replica_auto_scaling(params = {}) ⇒ Types::UpdateTableReplicaAutoScalingOutput
Updates auto scaling settings on your global tables at once.
-
#update_time_to_live(params = {}) ⇒ Types::UpdateTimeToLiveOutput
The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table.
Instance Method Summary
-
#initialize(options) ⇒ Client
constructor
A new instance of Client.
-
#stub_data(operation_name, data = {}) ⇒ Object
-
#wait_until(waiter_name, params = {}, options = {}) {|w.waiter| ... } ⇒ Boolean
Polls an API operation until a resource enters a desired state.
Methods included from ClientStubs
#api_requests, #stub_responses
Methods inherited from Seahorse::Client::Base
add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins
Methods included from Seahorse::Client::HandlerBuilder
#handle, #handle_request, #handle_response
Constructor Details
#initialize(options) ⇒ Client
Returns a new instance of Client.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 479

def initialize(*args)
  super
end
Instance Method Details
#batch_execute_statement(params = {}) ⇒ Types::BatchExecuteStatementOutput
This operation allows you to perform batch reads or writes on data
stored in DynamoDB, using PartiQL. Each read statement in a
BatchExecuteStatement
must specify an equality condition on all key
attributes. This enforces that each SELECT
statement in a batch
returns at most a single item. For more information, see Running
batch operations with PartiQL for DynamoDB.
An HTTP 200 response does not mean that all statements in the
BatchExecuteStatement succeeded. Error details for individual
statements can be found under the Error field of the
BatchStatementResponse for each statement.
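As an illustration only, a minimal batch read might look like the sketch below; the Music table, its Artist/SongTitle key attributes, and the literal values are assumptions, and the client is configured as shown in the Overview.

resp = client.batch_execute_statement({
  statements: [
    { statement: "SELECT * FROM Music WHERE Artist = ? AND SongTitle = ?",
      parameters: ["No One You Know", "Call Me Today"] },
    { statement: "SELECT * FROM Music WHERE Artist = ? AND SongTitle = ?",
      parameters: ["Acme Band", "Happy Day"] }
  ]
})

# A 200 response does not imply per-statement success; inspect each entry.
resp.responses.each do |r|
  if r.error
    puts "statement failed: #{r.error.code} - #{r.error.message}"
  else
    puts r.item
  end
end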
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 577

def batch_execute_statement(params = {}, options = {})
  req = build_request(:batch_execute_statement, params)
  req.send_request(options)
end
#batch_get_item(params = {}) ⇒ Types::BatchGetItemOutput
The BatchGetItem
operation returns the attributes of one or more
items from one or more tables. You identify requested items by primary
key.
A single operation can retrieve up to 16 MB of data, which can contain
as many as 100 items. BatchGetItem
returns a partial result if the
response size limit is exceeded, the table's provisioned throughput
is exceeded, more than 1MB per partition is requested, or an internal
processing failure occurs. If a partial result is returned, the
operation returns a value for UnprocessedKeys
. You can use this
value to retry the operation starting with the next item to get.
If you request more than 100 items, BatchGetItem
returns a
ValidationException
with the message "Too many items requested for
the BatchGetItem call."
For example, if you ask to retrieve 100 items, but each individual
item is 300 KB in size, the system returns 52 items (so as not to
exceed the 16 MB limit). It also returns an appropriate
UnprocessedKeys
value so you can get the next page of results. If
desired, your application can include its own logic to assemble the
pages of results into one dataset.
If none of the items can be processed due to insufficient
provisioned throughput on all of the tables in the request, then
BatchGetItem
returns a ProvisionedThroughputExceededException
. If
at least one of the items is successfully processed, then
BatchGetItem
completes successfully, while returning the keys of the
unread items in UnprocessedKeys
.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem
performs eventually consistent reads on
every table in the request. If you want strongly consistent reads
instead, you can set ConsistentRead
to true
for any or all tables.
In order to minimize response latency, BatchGetItem
may retrieve
items in parallel.
When designing your application, keep in mind that DynamoDB does not
return items in any particular order. To help parse the response by
item, include the primary key values for the items in your request in
the ProjectionExpression
parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
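A non-authoritative sketch of the UnprocessedKeys retry pattern described above (the Music table and its Artist/SongTitle keys are assumptions; the client is configured as in the Overview):

request_items = {
  "Music" => {
    keys: [
      { "Artist" => "No One You Know", "SongTitle" => "Call Me Today" },
      { "Artist" => "Acme Band",       "SongTitle" => "Happy Day" }
    ],
    consistent_read: false
  }
}

items = []
attempt = 0
loop do
  resp = client.batch_get_item(request_items: request_items)
  items.concat(resp.responses["Music"] || [])
  request_items = resp.unprocessed_keys
  break if request_items.empty?
  attempt += 1
  sleep(0.1 * (2**attempt)) # exponential backoff before retrying unprocessed keys
end

The loop re-requests only the keys DynamoDB reported as unprocessed, backing off between attempts as recommended above.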
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 856

def batch_get_item(params = {}, options = {})
  req = build_request(:batch_get_item, params)
  req.send_request(options)
end
#batch_write_item(params = {}) ⇒ Types::BatchWriteItemOutput
The BatchWriteItem
operation puts or deletes multiple items in one
or more tables. A single call to BatchWriteItem
can transmit up to
16MB of data over the network, consisting of up to 25 item put or
delete operations. While individual items can be up to 400 KB once
stored, it's important to note that an item's representation might
be greater than 400KB while being sent in DynamoDB's JSON format for
the API call. For more details on this distinction, see Naming Rules
and Data Types.
BatchWriteItem
cannot update items. If you perform a
BatchWriteItem
operation on an existing item, that item's values
will be overwritten by the operation and it will appear like it was
updated. To update items, we recommend you use the UpdateItem
action.
The individual PutItem
and DeleteItem
operations specified in
BatchWriteItem
are atomic; however BatchWriteItem
as a whole is
not. If any requested operations fail because the table's provisioned
throughput is exceeded or an internal processing failure occurs, the
failed operations are returned in the UnprocessedItems
response
parameter. You can investigate and optionally resend the requests.
Typically, you would call BatchWriteItem
in a loop. Each iteration
would check for unprocessed items and submit a new BatchWriteItem
request with those unprocessed items until all items have been
processed.
For tables and indexes with provisioned capacity, if none of the items
can be processed due to insufficient provisioned throughput on all of
the tables in the request, then BatchWriteItem
returns a
ProvisionedThroughputExceededException
. For all tables and indexes,
if none of the items can be processed due to other throttling
scenarios (such as exceeding partition level limits), then
BatchWriteItem
returns a ThrottlingException
.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
With BatchWriteItem
, you can efficiently write or delete large
amounts of data, such as from Amazon EMR, or copy data from another
database into DynamoDB. In order to improve performance with these
large-scale operations, BatchWriteItem
does not behave in the same
way as individual PutItem
and DeleteItem
calls would. For example,
you cannot specify conditions on individual put and delete requests,
and BatchWriteItem
does not return deleted items in the response.
If you use a programming language that supports concurrency, you can
use threads to write items in parallel. Your application must include
the necessary logic to manage the threads. With languages that don't
support threading, you must update or delete the specified items one
at a time. In both situations, BatchWriteItem
performs the specified
put and delete operations in parallel, giving you the power of the
thread pool approach without having to introduce complexity into your
application.
Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
If one or more of the following is true, DynamoDB rejects the entire batch write operation:
One or more tables specified in the BatchWriteItem request does not exist.
Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.
You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.
Your request contains at least two items with identical hash and range keys (which essentially is two put operations).
There are more than 25 requests in the batch.
Any individual item in a batch exceeds 400 KB.
The total request size exceeds 16 MB.
Any individual items with keys exceeding the key length limits. For a partition key, the limit is 2048 bytes and for a sort key, the limit is 1024 bytes.
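For illustration, a sketch of the put/delete batching and UnprocessedItems retry loop described above; the table, keys, and item attributes are assumptions, and the client is configured as in the Overview.

request_items = {
  "Music" => [
    { put_request:    { item: { "Artist" => "Acme Band", "SongTitle" => "Happy Day" } } },
    { delete_request: { key:  { "Artist" => "No One You Know", "SongTitle" => "Call Me Today" } } }
  ]
}

attempt = 0
until request_items.empty?
  resp = client.batch_write_item(request_items: request_items)
  request_items = resp.unprocessed_items
  attempt += 1
  sleep(0.1 * (2**attempt)) unless request_items.empty? # exponential backoff before resending
end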
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1122

def batch_write_item(params = {}, options = {})
  req = build_request(:batch_write_item, params)
  req.send_request(options)
end
#create_backup(params = {}) ⇒ Types::CreateBackupOutput
Creates a backup for an existing table.
Each time you create an on-demand backup, the entire table data is backed up. There is no limit to the number of on-demand backups that can be taken.
When you create an on-demand backup, a time marker of the request is cataloged, and the backup is created asynchronously, by applying all changes until the time of the request to the last full table snapshot. Backup requests are processed instantaneously and become available for restore within minutes.
You can call CreateBackup
at a maximum rate of 50 times per second.
All backups in DynamoDB work without consuming any provisioned throughput on the table.
If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed to contain all data committed to the table up to 14:24:00, and data committed after 14:26:00 will not be. The backup might contain data modifications made between 14:24:00 and 14:26:00. On-demand backup does not support causal consistency.
Along with data, the following are also included on the backups:
Global secondary indexes (GSIs)
Local secondary indexes (LSIs)
Streams
Provisioned read and write capacity
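A minimal call might look like this sketch; the table and backup names are illustrative only.

resp = client.create_backup({
  table_name:  "Music",
  backup_name: "Music-pre-migration"
})
puts resp.backup_details.backup_arn    # can be used later with restore_table_from_backup
puts resp.backup_details.backup_status # "CREATING" until the backup becomes available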
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1192

def create_backup(params = {}, options = {})
  req = build_request(:create_backup, params)
  req.send_request(options)
end
#create_global_table(params = {}) ⇒ Types::CreateGlobalTableOutput
Creates a global table from an existing table. A global table creates a replication relationship between two or more DynamoDB tables with the same table name in the provided Regions.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
If you want to add a new replica table to a global table, each of the following conditions must be true:
The table must have the same primary key as all of the other replicas.
The table must have the same name as all of the other replicas.
The table must have DynamoDB Streams enabled, with the stream containing both the new and the old images of the item.
None of the replica tables in the global table can contain any data.
If global secondary indexes are specified, then the following conditions must also be met:
The global secondary indexes must have the same name.
The global secondary indexes must have the same hash key and sort key (if present).
If local secondary indexes are specified, then the following conditions must also be met:
The local secondary indexes must have the same name.
The local secondary indexes must have the same hash key and sort key (if present).
Write capacity settings should be set consistently across your replica tables and secondary indexes. DynamoDB strongly recommends enabling auto scaling to manage the write capacity settings for all of your global tables replicas and indexes.
If you prefer to manage write capacity settings manually, you should provision equal replicated write capacity units to your replica tables. You should also provision equal replicated write capacity units to matching secondary indexes across your global table.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1310

def create_global_table(params = {}, options = {})
  req = build_request(:create_global_table, params)
  req.send_request(options)
end
#create_table(params = {}) ⇒ Types::CreateTableOutput
The CreateTable
operation adds a new table to your account. In an
Amazon Web Services account, table names must be unique within each
Region. That is, you can have two tables with same name if you create
the tables in different Regions.
CreateTable
is an asynchronous operation. Upon receiving a
CreateTable
request, DynamoDB immediately returns a response with a
TableStatus
of CREATING
. After the table is created, DynamoDB sets
the TableStatus
to ACTIVE
. You can perform read and write
operations only on an ACTIVE
table.
You can optionally define secondary indexes on the new table, as part
of the CreateTable
operation. If you want to create multiple tables
with secondary indexes on them, you must create the tables
sequentially. Only one table with secondary indexes can be in the
CREATING
state at any given time.
You can use the DescribeTable
action to check the table status.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1842

def create_table(params = {}, options = {})
  req = build_request(:create_table, params)
  req.send_request(options)
end
#delete_backup(params = {}) ⇒ Types::DeleteBackupOutput
Deletes an existing backup of a table.
You can call DeleteBackup
at a maximum rate of 10 times per second.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 1920

def delete_backup(params = {}, options = {})
  req = build_request(:delete_backup, params)
  req.send_request(options)
end
#delete_item(params = {}) ⇒ Types::DeleteItemOutput
Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value.
In addition to deleting an item, you can also return the item's
attribute values in the same operation, using the ReturnValues
parameter.
Unless you specify conditions, the DeleteItem
is an idempotent
operation; running it multiple times on the same item or attribute
does not result in an error response.
Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is not deleted.
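A conditional delete might look like the sketch below; the table, key, Rating attribute, and threshold are assumptions.

begin
  resp = client.delete_item({
    table_name: "Music",
    key: { "Artist" => "Acme Band", "SongTitle" => "Happy Day" },
    condition_expression: "Rating <= :max",
    expression_attribute_values: { ":max" => 3 },
    return_values: "ALL_OLD"
  })
  puts resp.attributes # the deleted item's attributes, returned because of ALL_OLD
rescue Aws::DynamoDB::Errors::ConditionalCheckFailedException
  puts "condition not met; the item was not deleted"
end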
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2206

def delete_item(params = {}, options = {})
  req = build_request(:delete_item, params)
  req.send_request(options)
end
#delete_resource_policy(params = {}) ⇒ Types::DeleteResourcePolicyOutput
Deletes the resource-based policy attached to the resource, which can be a table or stream.
DeleteResourcePolicy
is an idempotent operation; running it multiple
times on the same resource doesn't result in an error response,
unless you specify an ExpectedRevisionId
, which will then return a
PolicyNotFoundException
.
To make sure that you don't inadvertently lock yourself out of your
own resources, the root principal in your Amazon Web Services account
can perform DeleteResourcePolicy
requests, even if your
resource-based policy explicitly denies the root principal's access.
DeleteResourcePolicy
is an asynchronous operation. If you issue a
GetResourcePolicy
request immediately after running the
DeleteResourcePolicy
request, DynamoDB might still return the
deleted policy. This is because the policy for your resource might not
have been deleted yet. Wait for a few seconds, and then try the
GetResourcePolicy
request again.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2267

def delete_resource_policy(params = {}, options = {})
  req = build_request(:delete_resource_policy, params)
  req.send_request(options)
end
#delete_table(params = {}) ⇒ Types::DeleteTableOutput
The DeleteTable
operation deletes a table and all of its items.
After a DeleteTable
request, the specified table is in the
DELETING
state until DynamoDB completes the deletion. If the table
is in the ACTIVE
state, you can delete it. If a table is in
CREATING
or UPDATING
states, then DynamoDB returns a
ResourceInUseException
. If the specified table does not exist,
DynamoDB returns a ResourceNotFoundException
. If table is already in
the DELETING
state, no error is returned.
For global tables, this operation only applies to global tables using Version 2019.11.21 (Current version).
DynamoDB might continue to accept data read and write operations, such
as GetItem and PutItem, on a table in the DELETING state until
the table deletion is complete. For the full list of table states, see
TableStatus.
When you delete a table, any indexes on that table are also deleted.
If you have DynamoDB Streams enabled on the table, then the
corresponding stream on that table goes into the DISABLED
state, and
the stream is automatically deleted after 24 hours.
Use the DescribeTable
action to check the status of the table.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2447

def delete_table(params = {}, options = {})
  req = build_request(:delete_table, params)
  req.send_request(options)
end
#describe_backup(params = {}) ⇒ Types::DescribeBackupOutput
Describes an existing backup of a table.
You can call DescribeBackup
at a maximum rate of 10 times per
second.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2526

def describe_backup(params = {}, options = {})
  req = build_request(:describe_backup, params)
  req.send_request(options)
end
#describe_continuous_backups(params = {}) ⇒ Types::DescribeContinuousBackupsOutput
Checks the status of continuous backups and point in time recovery on
the specified table. Continuous backups are ENABLED
on all tables at
table creation. If point in time recovery is enabled,
PointInTimeRecoveryStatus
will be set to ENABLED.
After continuous backups and point in time recovery are enabled, you
can restore to any point in time within EarliestRestorableDateTime
and LatestRestorableDateTime
.
LatestRestorableDateTime
is typically 5 minutes before the current
time. You can restore your table to any point in time during the last
35 days.
You can call DescribeContinuousBackups
at a maximum rate of 10 times
per second.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2575

def describe_continuous_backups(params = {}, options = {})
  req = build_request(:describe_continuous_backups, params)
  req.send_request(options)
end
#describe_contributor_insights(params = {}) ⇒ Types::DescribeContributorInsightsOutput
Returns information about contributor insights for a given table or global secondary index.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2621

def describe_contributor_insights(params = {}, options = {})
  req = build_request(:describe_contributor_insights, params)
  req.send_request(options)
end
#describe_endpoints(params = {}) ⇒ Types::DescribeEndpointsResponse
Returns the regional endpoint information. For more information on policy permissions, please see Internetwork traffic privacy.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2647

def describe_endpoints(params = {}, options = {})
  req = build_request(:describe_endpoints, params)
  req.send_request(options)
end
#describe_export(params = {}) ⇒ Types::DescribeExportOutput
Describes an existing table export.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2697

def describe_export(params = {}, options = {})
  req = build_request(:describe_export, params)
  req.send_request(options)
end
#describe_global_table(params = {}) ⇒ Types::DescribeGlobalTableOutput
Returns information about the specified global table.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2766

def describe_global_table(params = {}, options = {})
  req = build_request(:describe_global_table, params)
  req.send_request(options)
end
#describe_global_table_settings(params = {}) ⇒ Types::DescribeGlobalTableSettingsOutput
Describes Region-specific settings for a global table.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2866

def describe_global_table_settings(params = {}, options = {})
  req = build_request(:describe_global_table_settings, params)
  req.send_request(options)
end
#describe_import(params = {}) ⇒ Types::DescribeImportOutput
Represents the properties of the import.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2945

def describe_import(params = {}, options = {})
  req = build_request(:describe_import, params)
  req.send_request(options)
end
#describe_kinesis_streaming_destination(params = {}) ⇒ Types::DescribeKinesisStreamingDestinationOutput
Returns information about the status of Kinesis streaming.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 2980

def describe_kinesis_streaming_destination(params = {}, options = {})
  req = build_request(:describe_kinesis_streaming_destination, params)
  req.send_request(options)
end
#describe_limits(params = {}) ⇒ Types::DescribeLimitsOutput
Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there.
When you establish an Amazon Web Services account, the account has initial quotas on the maximum read capacity units and write capacity units that you can provision across all of your DynamoDB tables in a given Region. Also, there are per-table quotas that apply when you create a table there. For more information, see Service, Account, and Table Quotas page in the Amazon DynamoDB Developer Guide.
Although you can increase these quotas by filing a case at Amazon Web
Services Support Center, obtaining the increase is not
instantaneous. The DescribeLimits
action lets you write code to
compare the capacity you are currently using to those quotas imposed
by your account so that you have enough time to apply for an increase
before you hit a quota.
For example, you could use one of the Amazon Web Services SDKs to do the following:
- Call DescribeLimits for a particular Region to obtain your current account quotas on provisioned capacity there.
- Create a variable to hold the aggregate read capacity units provisioned for all your tables in that Region, and one to hold the aggregate write capacity units. Zero them both.
- Call ListTables to obtain a list of all your DynamoDB tables.
- For each table name listed by ListTables, do the following:
  - Call DescribeTable with the table name.
  - Use the data returned by DescribeTable to add the read capacity units and write capacity units provisioned for the table itself to your variables.
  - If the table has one or more global secondary indexes (GSIs), loop over these GSIs and add their provisioned capacity values to your variables as well.
- Report the account quotas for that Region returned by DescribeLimits, along with the total current provisioned capacity levels you have calculated.
This will let you see whether you are getting close to your account-level quotas.
The per-table quotas apply only when you are creating a new table. They restrict the sum of the provisioned capacity of the new table itself and all its global secondary indexes.
For existing tables and their GSIs, DynamoDB doesn't let you increase provisioned capacity extremely rapidly, but the only quota that applies is that the aggregate provisioned capacity over all your tables and GSIs cannot exceed either of the per-account quotas.
DescribeLimits
should only be called periodically. You can expect
throttling errors if you call it more than once in a minute.
The DescribeLimits
Request element has no content.
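The procedure above can be sketched roughly as follows; handling of on-demand tables is simplified and the snippet is illustrative rather than definitive.

limits = client.describe_limits
read_in_use  = 0
write_in_use = 0

client.list_tables.each_page do |page|
  page.table_names.each do |name|
    table = client.describe_table(table_name: name).table
    if table.provisioned_throughput
      read_in_use  += table.provisioned_throughput.read_capacity_units.to_i
      write_in_use += table.provisioned_throughput.write_capacity_units.to_i
    end
    (table.global_secondary_indexes || []).each do |gsi|
      next unless gsi.provisioned_throughput
      read_in_use  += gsi.provisioned_throughput.read_capacity_units.to_i
      write_in_use += gsi.provisioned_throughput.write_capacity_units.to_i
    end
  end
end

puts "account read quota #{limits.account_max_read_capacity_units}, provisioned #{read_in_use}"
puts "account write quota #{limits.account_max_write_capacity_units}, provisioned #{write_in_use}"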
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3089

def describe_limits(params = {}, options = {})
  req = build_request(:describe_limits, params)
  req.send_request(options)
end
#describe_table(params = {}) ⇒ Types::DescribeTableOutput
Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.
For global tables, this operation only applies to global tables using Version 2019.11.21 (Current version).
If you issue a DescribeTable request immediately after a
CreateTable request, DynamoDB might return a
ResourceNotFoundException. This is because DescribeTable uses an
eventually consistent query, and the metadata for your table might not
be available at that moment. Wait for a few seconds, and then try the
DescribeTable request again.
The following waiters are defined for this operation (see #wait_until for detailed usage):
- table_exists
- table_not_exists
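For example, a caller might pair DescribeTable with the table_exists waiter as in this sketch; the table name and polling values are illustrative.

client.wait_until(:table_exists, table_name: "Music") do |w|
  w.max_attempts = 25 # illustrative polling configuration
  w.delay        = 20 # seconds between polls
end

resp = client.describe_table(table_name: "Music")
puts resp.table.table_status # "ACTIVE" once creation has finished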
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3281

def describe_table(params = {}, options = {})
  req = build_request(:describe_table, params)
  req.send_request(options)
end
#describe_table_replica_auto_scaling(params = {}) ⇒ Types::DescribeTableReplicaAutoScalingOutput
Describes auto scaling settings across replicas of the global table at once.
For global tables, this operation only applies to global tables using Version 2019.11.21 (Current version).
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3361

def describe_table_replica_auto_scaling(params = {}, options = {})
  req = build_request(:describe_table_replica_auto_scaling, params)
  req.send_request(options)
end
#describe_time_to_live(params = {}) ⇒ Types::DescribeTimeToLiveOutput
Gives a description of the Time to Live (TTL) status on the specified table.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3392

def describe_time_to_live(params = {}, options = {})
  req = build_request(:describe_time_to_live, params)
  req.send_request(options)
end
#disable_kinesis_streaming_destination(params = {}) ⇒ Types::KinesisStreamingDestinationOutput
Stops replication from the DynamoDB table to the Kinesis data stream. This is done without deleting either of the resources.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3439

def disable_kinesis_streaming_destination(params = {}, options = {})
  req = build_request(:disable_kinesis_streaming_destination, params)
  req.send_request(options)
end
#enable_kinesis_streaming_destination(params = {}) ⇒ Types::KinesisStreamingDestinationOutput
Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable workflow. If this operation doesn't return results immediately, use DescribeKinesisStreamingDestination to check if streaming to the Kinesis data stream is ACTIVE.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3489

def enable_kinesis_streaming_destination(params = {}, options = {})
  req = build_request(:enable_kinesis_streaming_destination, params)
  req.send_request(options)
end
#execute_statement(params = {}) ⇒ Types::ExecuteStatementOutput
This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL.
For PartiQL reads (SELECT
statement), if the total number of
processed items exceeds the maximum dataset size limit of 1 MB, the
read stops and results are returned to the user as a
LastEvaluatedKey
value to continue the read in a subsequent
operation. If the filter criteria in WHERE
clause does not match any
data, the read will return an empty result set.
A single SELECT
statement response can return up to the maximum
number of items (if using the Limit parameter) or a maximum of 1 MB of
data (and then apply any filtering to the results using WHERE
clause). If LastEvaluatedKey
is present in the response, you need to
paginate the result set. If NextToken
is present, you need to
paginate the result set and include NextToken
.
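A paginated PartiQL read might look like this sketch; the statement, table, and parameter values are assumptions, and the client is configured as in the Overview.

params = {
  statement:  "SELECT * FROM Music WHERE Artist = ?",
  parameters: ["Acme Band"]
}

items = []
loop do
  resp = client.execute_statement(params)
  items.concat(resp.items)
  break unless resp.next_token        # no more pages
  params[:next_token] = resp.next_token
end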
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3610

def execute_statement(params = {}, options = {})
  req = build_request(:execute_statement, params)
  req.send_request(options)
end
#execute_transaction(params = {}) ⇒ Types::ExecuteTransactionOutput
This operation allows you to perform transactional reads or writes on data stored in DynamoDB, using PartiQL.
The entire transaction must consist of either read statements or write
statements; you cannot mix both in one transaction. The EXISTS function
is an exception and can be used to check the condition of specific
attributes of the item, in a similar manner to ConditionCheck in the
TransactWriteItems API.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3695

def execute_transaction(params = {}, options = {})
  req = build_request(:execute_transaction, params)
  req.send_request(options)
end
#export_table_to_point_in_time(params = {}) ⇒ Types::ExportTableToPointInTimeOutput
Exports table data to an S3 bucket. The table must have point in time recovery enabled, and you can export data from any time within the point in time recovery window.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 3826

def export_table_to_point_in_time(params = {}, options = {})
  req = build_request(:export_table_to_point_in_time, params)
  req.send_request(options)
end
#get_item(params = {}) ⇒ Types::GetItemOutput
The GetItem
operation returns a set of attributes for the item with
the given primary key. If there is no matching item, GetItem
does
not return any data and there will be no Item
element in the
response.
GetItem
provides an eventually consistent read by default. If your
application requires a strongly consistent read, set ConsistentRead
to true
. Although a strongly consistent read might take more time
than an eventually consistent read, it always returns the last updated
value.
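A strongly consistent read might look like the following sketch; the table, key, and projected attributes are assumptions.

resp = client.get_item({
  table_name: "Music",
  key: { "Artist" => "Acme Band", "SongTitle" => "Happy Day" },
  consistent_read: true,
  projection_expression: "Artist, SongTitle, AlbumTitle"
})

item = resp.item # nil when no matching item exists (no Item element in the response)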
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4021

def get_item(params = {}, options = {})
  req = build_request(:get_item, params)
  req.send_request(options)
end
#get_resource_policy(params = {}) ⇒ Types::GetResourcePolicyOutput
Returns the resource-based policy document attached to the resource, which can be a table or stream, in JSON format.
GetResourcePolicy
follows an eventually consistent model.
The following list describes the outcomes when you issue the
GetResourcePolicy
request immediately after issuing another request:
If you issue a GetResourcePolicy request immediately after a PutResourcePolicy request, DynamoDB might return a PolicyNotFoundException.
If you issue a GetResourcePolicy request immediately after a DeleteResourcePolicy request, DynamoDB might return the policy that was present before the deletion request.
If you issue a GetResourcePolicy request immediately after a CreateTable request, which includes a resource-based policy, DynamoDB might return a ResourceNotFoundException or a PolicyNotFoundException.
Because GetResourcePolicy
uses an eventually consistent query, the
metadata for your policy or table might not be available at that
moment. Wait for a few seconds, and then retry the GetResourcePolicy
request.
After a GetResourcePolicy
request returns a policy created using the
PutResourcePolicy
request, the policy will be applied in the
authorization of requests to the resource. Because this process is
eventually consistent, it will take some time to apply the policy to
all requests to a resource. Policies that you attach while creating a
table using the CreateTable
request will always be applied to all
requests for that table.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4088

def get_resource_policy(params = {}, options = {})
  req = build_request(:get_resource_policy, params)
  req.send_request(options)
end
#import_table(params = {}) ⇒ Types::ImportTableOutput
Imports table data from an S3 bucket.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4267

def import_table(params = {}, options = {})
  req = build_request(:import_table, params)
  req.send_request(options)
end
#list_backups(params = {}) ⇒ Types::ListBackupsOutput
List DynamoDB backups that are associated with an Amazon Web Services
account and weren't made with Amazon Web Services Backup. To list
these backups for a given table, specify TableName
. ListBackups
returns a paginated list of results with at most 1 MB worth of items
in a page. You can also specify a maximum number of entries to be
returned in a page.
In the request, start time is inclusive, but end time is exclusive. Note that these boundaries are for the time at which the original backup was requested.
You can call ListBackups
a maximum of five times per second.
If you want to retrieve the complete list of backups made with Amazon Web Services Backup, use the Amazon Web Services Backup list API.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4362

def list_backups(params = {}, options = {})
  req = build_request(:list_backups, params)
  req.send_request(options)
end
#list_contributor_insights(params = {}) ⇒ Types::ListContributorInsightsOutput
Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4407

def list_contributor_insights(params = {}, options = {})
  req = build_request(:list_contributor_insights, params)
  req.send_request(options)
end
#list_exports(params = {}) ⇒ Types::ListExportsOutput
Lists completed exports within the past 90 days.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4452

def list_exports(params = {}, options = {})
  req = build_request(:list_exports, params)
  req.send_request(options)
end
#list_global_tables(params = {}) ⇒ Types::ListGlobalTablesOutput
Lists all global tables that have a replica in the specified Region.
This documentation is for version 2017.11.29 (Legacy) of global tables, which should be avoided for new global tables. Customers should use Global Tables version 2019.11.21 (Current) when possible, because it provides greater flexibility, higher efficiency, and consumes less write capacity than 2017.11.29 (Legacy).
To determine which version you're using, see Determining the global table version you are using. To update existing global tables from version 2017.11.29 (Legacy) to version 2019.11.21 (Current), see Upgrading global tables.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4517

def list_global_tables(params = {}, options = {})
  req = build_request(:list_global_tables, params)
  req.send_request(options)
end
#list_imports(params = {}) ⇒ Types::ListImportsOutput
Lists completed imports within the past 90 days.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4570

def list_imports(params = {}, options = {})
  req = build_request(:list_imports, params)
  req.send_request(options)
end
#list_tables(params = {}) ⇒ Types::ListTablesOutput
Returns an array of table names associated with the current account
and endpoint. The output from ListTables
is paginated, with each
page returning a maximum of 100 table names.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
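Because the output is paginated at up to 100 names per page, collecting every table name typically iterates the pages, as in this sketch.

table_names = []
client.list_tables.each_page do |page|
  table_names.concat(page.table_names)
end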
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4630

def list_tables(params = {}, options = {})
  req = build_request(:list_tables, params)
  req.send_request(options)
end
#list_tags_of_resource(params = {}) ⇒ Types::ListTagsOfResourceOutput
List all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource up to 10 times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 4677

def list_tags_of_resource(params = {}, options = {})
  req = build_request(:list_tags_of_resource, params)
  req.send_request(options)
end
#put_item(params = {}) ⇒ Types::PutItemOutput
Creates a new item, or replaces an old item with a new item. If an
item that has the same primary key as the new item already exists in
the specified table, the new item completely replaces the existing
item. You can perform a conditional put operation (add a new item if
one with the specified primary key doesn't exist), or replace an
existing item if it has certain attribute values. You can return the
item's attribute values in the same operation, using the
ReturnValues
parameter.
When you add an item, the primary key attributes are the only required attributes.
Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a length greater than zero if the attribute is used as a key attribute for a table or index. Set type attributes cannot be empty.
Invalid Requests with empty values will be rejected with a
ValidationException
exception.
To prevent a new item from replacing an existing item, use a
conditional expression that contains the attribute_not_exists
function with the name of the attribute being used as the partition
key for the table. Since every record must contain that attribute, the
attribute_not_exists function will only succeed if no matching item
exists.
For more information about PutItem
, see Working with Items in
the Amazon DynamoDB Developer Guide.
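The conditional-put pattern described above might be written as follows; the table name and item attributes are assumptions.

begin
  client.put_item({
    table_name: "Music",
    item: {
      "Artist"     => "Acme Band",
      "SongTitle"  => "Happy Day",
      "AlbumTitle" => "Songs About Life"
    },
    condition_expression: "attribute_not_exists(Artist)"
  })
rescue Aws::DynamoDB::Errors::ConditionalCheckFailedException
  # an item with this primary key already exists, so nothing was overwritten
end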
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 5006

def put_item(params = {}, options = {})
  req = build_request(:put_item, params)
  req.send_request(options)
end
#put_resource_policy(params = {}) ⇒ Types::PutResourcePolicyOutput
Attaches a resource-based policy document to the resource, which can be a table or stream. When you attach a resource-based policy using this API, the policy application is eventually consistent .
PutResourcePolicy
is an idempotent operation; running it multiple
times on the same resource using the same policy document will return
the same revision ID. If you specify an ExpectedRevisionId
that
doesn't match the current policy's RevisionId
, the
PolicyNotFoundException
will be returned.
PutResourcePolicy
is an asynchronous operation. If you issue a
GetResourcePolicy
request immediately after a PutResourcePolicy
request, DynamoDB might return your previous policy, if there was one,
or return the PolicyNotFoundException
. This is because
GetResourcePolicy
uses an eventually consistent query, and the
metadata for your policy or table might not be available at that
moment. Wait for a few seconds, and then try the GetResourcePolicy
request again.
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 5107

def put_resource_policy(params = {}, options = {})
  req = build_request(:put_resource_policy, params)
  req.send_request(options)
end
#query(params = {}) ⇒ Types::QueryOutput
You must provide the name of the partition key attribute and a single
value for that attribute. Query
returns all items with that
partition key value. Optionally, you can provide a sort key attribute
and use a comparison operator to refine the search results.
Use the KeyConditionExpression
parameter to provide a specific value
for the partition key. The Query
operation will return all of the
items from the table or index with that partition key value. You can
optionally narrow the scope of the Query
operation by specifying a
sort key value and a comparison operator in KeyConditionExpression
.
To further refine the Query
results, you can optionally provide a
FilterExpression
. A FilterExpression
determines which items within
the results should be returned to you. All of the other results are
discarded.
A Query
operation always returns a result set. If no matching items
are found, the result set will be empty. Queries that do not return
results consume the minimum number of read capacity units for that
type of read operation.
DynamoDB calculates the number of read capacity units consumed based on
item size, not on the amount of data that is returned to an
application. The number of capacity units consumed will be the same
whether you request all of the attributes (the default behavior) or
just some of them (using a projection expression). The number will also
be the same whether or not you use a FilterExpression.
Query
results are always sorted by the sort key value. If the data
type of the sort key is Number, the results are returned in numeric
order; otherwise, the results are returned in order of UTF-8 bytes. By
default, the sort order is ascending. To reverse the order, set the
ScanIndexForward
parameter to false.
A single Query
operation will read up to the maximum number of items
set (if using the Limit
parameter) or a maximum of 1 MB of data and
then apply any filtering to the results using FilterExpression
. If
LastEvaluatedKey
is present in the response, you will need to
paginate the result set. For more information, see Paginating the
Results in the Amazon DynamoDB Developer Guide.
FilterExpression
is applied after a Query
finishes, but before the
results are returned. A FilterExpression
cannot contain partition
key or sort key attributes. You need to specify those attributes in
the KeyConditionExpression
.
A Query operation can return an empty result set and a
LastEvaluatedKey if all the items read for the page of results are
filtered out.
You can query a table, a local secondary index, or a global secondary
index. For a query on a table or on a local secondary index, you can
set the ConsistentRead
parameter to true
and obtain a strongly
consistent result. Global secondary indexes support eventually
consistent reads only, so do not specify ConsistentRead
when
querying a global secondary index.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
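A query with a key condition, a filter expression, and page iteration might look like this sketch; the table and attribute names are assumptions, and the client is configured as in the Overview.

resp = client.query({
  table_name: "Music",
  key_condition_expression: "Artist = :artist AND begins_with(SongTitle, :prefix)",
  filter_expression: "AlbumTitle = :album",
  expression_attribute_values: {
    ":artist" => "Acme Band",
    ":prefix" => "H",
    ":album"  => "Songs About Life"
  },
  scan_index_forward: false # descending order by sort key
})

# The response is Enumerable across pages; this walks every matching item.
resp.each_page do |page|
  page.items.each { |item| puts item["SongTitle"] }
end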
# File 'gems/aws-sdk-dynamodb/lib/aws-sdk-dynamodb/client.rb', line 5645

def query(params = {}, options = {})
  req = build_request(:query, params)
  req.send_request(options)
end