Data Ingestion - Amazon Timestream

  • Ensure that the timestamp of the incoming data is not earlier than the retention period configured for the memory store and not later than the future ingestion period defined in Quotas. Sending data with a timestamp outside these bounds will result in the data being rejected by Timestream (see the sketch below).
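
    A minimal sketch of such a bounds check, assuming placeholder values for the memory store retention and the future ingestion period; take the real values from your table configuration and the Quotas page.

    ```python
    from datetime import datetime, timedelta, timezone

    # Placeholder values: use your table's actual memory store retention
    # and the future ingestion period documented in Quotas.
    MEMORY_STORE_RETENTION = timedelta(hours=12)
    FUTURE_INGESTION_PERIOD = timedelta(minutes=15)

    def is_within_ingestion_window(record_time: datetime) -> bool:
        """Return True if the (timezone-aware) timestamp falls inside the accepted window."""
        now = datetime.now(timezone.utc)
        return now - MEMORY_STORE_RETENTION <= record_time <= now + FUTURE_INGESTION_PERIOD
    ```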

  • When sending data to Timestream, batch multiple records into a single request to optimize data ingestion performance.

    • It is beneficial to batch together records from the same time series and records with the same measure name.

    • Batch as many records as possible in a single request as long as the requests are within the service limits defined in Quotas.

    • Use common attributes where possible to reduce data transfer and ingestion costs. Refer to the WriteRecords API for more information, and see the sketch after this list.
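
    The sketch below shows one way to batch records from a single time series into one WriteRecords call with boto3, factoring the attributes shared by every record into CommonAttributes. The database, table, dimension, and measure names are illustrative, and the batch size of 100 assumes the per-request record limit documented in Quotas.

    ```python
    import time
    import boto3

    client = boto3.client("timestream-write")

    def write_cpu_batch(readings):
        """Write a batch of (timestamp_ms, value) readings in a single request."""
        # Attributes shared by every record are moved into CommonAttributes
        # so they are transferred once per request instead of once per record.
        common_attributes = {
            "Dimensions": [
                {"Name": "region", "Value": "us-east-1"},  # illustrative dimensions
                {"Name": "host", "Value": "host-1"},
            ],
            "MeasureName": "cpu_utilization",
            "MeasureValueType": "DOUBLE",
            "TimeUnit": "MILLISECONDS",
        }
        records = [
            {"MeasureValue": str(value), "Time": str(timestamp_ms)}
            for timestamp_ms, value in readings
        ]
        client.write_records(
            DatabaseName="example_db",   # illustrative names
            TableName="example_table",
            CommonAttributes=common_attributes,
            Records=records,
        )

    # Example: 100 one-second readings batched into a single request.
    now_ms = int(time.time() * 1000)
    write_cpu_batch([(now_ms - i * 1000, 40.0 + i % 10) for i in range(100)])
    ```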

  • If you encounter partial failures on the client side while writing data to Timestream, you can resend the records that failed ingestion after you've addressed the cause of the rejection (see the sketch below).
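
    With boto3, one way to surface a partial failure is to catch RejectedRecordsException, which lists the index and reason of each rejected record. The helper below is a sketch that returns the rejected subset so it can be corrected and resent.

    ```python
    import boto3

    client = boto3.client("timestream-write")

    def write_and_collect_rejected(database, table, records):
        """Write a batch and return the records that were rejected, if any."""
        try:
            client.write_records(DatabaseName=database, TableName=table, Records=records)
            return []
        except client.exceptions.RejectedRecordsException as err:
            rejected = err.response.get("RejectedRecords", [])
            for item in rejected:
                # Each entry names the failing record's index and the rejection reason,
                # e.g. a timestamp outside the memory store retention bounds.
                print(f"Record {item['RecordIndex']} rejected: {item['Reason']}")
            return [records[item["RecordIndex"]] for item in rejected]
    ```

    After the rejection cause has been fixed (for example, by correcting an out-of-range timestamp), the returned records can be passed back to the same helper.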

  • Writing data in order of timestamp yields better write performance (see the sketch below).
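
    For example, a batch assembled from unordered readings can be sorted by its Time attribute before it is written (a small sketch, assuming records shaped as in the batching example above).

    ```python
    def sort_records_by_time(records):
        """Order a batch of Timestream records by their Time attribute, ascending."""
        return sorted(records, key=lambda record: int(record["Time"]))
    ```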

  • Amazon Timestream is designed to auto-scale to the needs of your application. When Timestream notices spikes in write requests from your application, your application may experience some level of initial throttling. If your application experiences this throttling, continue sending data to Timestream at the same (or increased) rate to enable Timestream to auto-scale and satisfy the needs of your application (see the sketch below).
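
    A minimal sketch of a write loop that retries throttled requests with a short pause instead of backing off the overall send rate; the retry count and delay are arbitrary choices, not prescribed values.

    ```python
    import time
    import boto3

    client = boto3.client("timestream-write")

    def write_with_retry(database, table, records, max_attempts=5):
        """Retry throttled writes so the overall ingestion rate is maintained."""
        for attempt in range(1, max_attempts + 1):
            try:
                client.write_records(DatabaseName=database, TableName=table, Records=records)
                return
            except client.exceptions.ThrottlingException:
                if attempt == max_attempts:
                    raise
                # Pause briefly before retrying; keep producing new batches in parallel
                # so Timestream sees sustained demand and can scale up.
                time.sleep(0.5 * attempt)
    ```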