Amazon DynamoDB
Developer Guide (API Version 2012-08-10)

Usage Notes

Except for the endpoint, applications that run with the downloadable version of DynamoDB on your system should also work with the Amazon DynamoDB web service. However, you should be aware of the following:

  • If you use the -sharedDb option, DynamoDB creates a single database file named shared-local-instance.db. Every program that connects to DynamoDB will access this file. If you delete the file, you will lose any data you have stored in it.

  • If you omit -sharedDb, the database file will be named myaccesskeyid_region.db, with the AWS access key ID and region as they appear in your application configuration. If you delete the file, you will lose any data you have stored in it.

  • If you use the -inMemory option, DynamoDB does not write any database files at all. Instead, all data is written to memory, and the data is not saved when you terminate DynamoDB.

  • If you use the -optimizeDbBeforeStartup option, you must also specify the -dbPath parameter so that DynamoDB will be able to find its database file.

  • The AWS SDKs for DynamoDB require that your application configuration specify an access key value and an AWS region value. Unless you are using the -sharedDb or the -inMemory option, DynamoDB uses these values to name the local database file.

    Although these do not have to be valid AWS values in order to run locally, you may find it useful to use valid values so that you can later run your code in the cloud simply by changing the endpoint you are using.
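As a concrete illustration of the notes above, the following Python sketch (assuming the boto3 SDK; the access key, region, and endpoint values are placeholders) builds client settings that differ between local and cloud use only in the endpoint:

```python
# Hypothetical helper: assemble DynamoDB client settings. Only the
# endpoint distinguishes local use from the web service.
def dynamodb_client_kwargs(local: bool = True) -> dict:
    kwargs = {
        # These do not have to be valid AWS values to run locally, but
        # unless -sharedDb or -inMemory is in effect, DynamoDB uses them
        # to name the database file (here: AKIAEXAMPLE_us-west-2.db).
        "region_name": "us-west-2",
        "aws_access_key_id": "AKIAEXAMPLE",
        "aws_secret_access_key": "example-secret",
    }
    if local:
        # DynamoDB Local listens on port 8000 by default.
        kwargs["endpoint_url"] = "http://localhost:8000"
    return kwargs
```

With boto3 installed, `boto3.client("dynamodb", **dynamodb_client_kwargs())` talks to the local instance; passing `local=False` (together with valid credentials) targets the web service with no other code changes.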

Differences Between DynamoDB Running Locally and the Amazon DynamoDB Web Service

The downloadable version of DynamoDB attempts to emulate the Amazon DynamoDB web service as closely as possible. However, it does differ from the Amazon DynamoDB service in the following ways:

  • Regions and distinct AWS accounts are not supported at the client level.

  • Provisioned throughput settings are ignored, even though the CreateTable operation requires them. For CreateTable, you can specify any numbers you want for provisioned read and write throughput; these numbers are recorded but never used. You can call UpdateTable as many times as you like per day; however, any changes to provisioned throughput values are ignored.

  • Scan operations are performed sequentially. Parallel scans are not supported. The Segment and TotalSegments parameters of the Scan operation are ignored.

  • The speed of read and write operations on table data is limited only by the speed of your computer. CreateTable, UpdateTable, and DeleteTable operations occur immediately, and table state is always ACTIVE. UpdateTable operations that only change the provisioned throughput settings on tables and/or global secondary indexes occur immediately. If an UpdateTable operation creates or deletes any global secondary indexes, those indexes transition through normal states (such as CREATING and DELETING, respectively) before they become ACTIVE. The table remains ACTIVE during this time.

  • Read operations are eventually consistent. However, due to the speed of DynamoDB running on your computer, most reads appear to be strongly consistent.

  • Consumed capacity units are not tracked. In operation responses, nulls are returned instead of capacity units.

  • Item collection metrics are not tracked; nor are item collection sizes. In operation responses, nulls are returned instead of item collection metrics.

  • In DynamoDB, there is a 1 MB limit on data returned per result set. The DynamoDB web service enforces this limit, and so does the downloadable version of DynamoDB. However, when querying an index, the DynamoDB service only calculates the size of the projected key and attributes. By contrast, the downloadable version of DynamoDB calculates the size of the entire item.

  • If you are using DynamoDB Streams, the rate at which shards are created might differ: in the DynamoDB web service, shard creation behavior is partially influenced by table partition activity, whereas when you run DynamoDB locally, there is no table partitioning. In either case, shards are ephemeral, so your application should not be dependent on shard behavior.
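To illustrate the CreateTable point above: the request must include provisioned throughput values, but the downloadable version records and ignores them. A minimal sketch of such a request (the Music table name and schema are hypothetical):

```python
# Hypothetical CreateTable request. The ProvisionedThroughput block is
# required by the API, but the downloadable version of DynamoDB does not
# enforce the numbers; locally the table is ACTIVE immediately.
def music_table_request() -> dict:
    return {
        "TableName": "Music",
        "KeySchema": [
            {"AttributeName": "Artist", "KeyType": "HASH"},
            {"AttributeName": "SongTitle", "KeyType": "RANGE"},
        ],
        "AttributeDefinitions": [
            {"AttributeName": "Artist", "AttributeType": "S"},
            {"AttributeName": "SongTitle", "AttributeType": "S"},
        ],
        # Any values are accepted here; they are not used for throttling
        # when running locally.
        "ProvisionedThroughput": {
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5,
        },
    }
```

Passed to `client.create_table(**music_table_request())`, this request succeeds against both the local instance and the web service; only the web service enforces the throughput settings.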