Amazon S3 examples using SDK for Rust
The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Rust with Amazon S3.
Basics are code examples that show you how to perform the essential operations within a service.
Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios.
Scenarios are code examples that show you how to accomplish specific tasks by calling multiple functions within a service or combined with other AWS services.
Each example includes a link to the complete source code, where you can find instructions on how to set up and run the code in context.
Get started
The following code examples show how to get started using Amazon S3.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/// S3 Hello World Example using the AWS SDK for Rust.
///
/// This example lists the objects in a bucket, uploads an object to that bucket,
/// and then retrieves the object and prints some S3 information about the object.
/// This shows a number of S3 features, including how to use built-in paginators
/// for large data sets.
///
/// # Arguments
///
/// * `client` - an S3 client configured appropriately for the environment.
/// * `bucket` - the bucket name that the object will be uploaded to. Must be present in the region the `client` is configured to use.
/// * `filepath` - a reference to a path that will be read and uploaded to S3.
/// * `key` - the string key that the object will be uploaded as inside the bucket.
async fn list_bucket_and_upload_object(
    client: &aws_sdk_s3::Client,
    bucket: &str,
    filepath: &Path,
    key: &str,
) -> Result<(), S3ExampleError> {
    // List the objects in the bucket.
    let mut objects = client
        .list_objects_v2()
        .bucket(bucket)
        .into_paginator()
        .send();
    println!("key\tetag\tlast_modified\tstorage_class");
    while let Some(Ok(object)) = objects.next().await {
        for item in object.contents() {
            println!(
                "{}\t{}\t{}\t{}",
                item.key().unwrap_or_default(),
                item.e_tag().unwrap_or_default(),
                item.last_modified()
                    .map(|lm| format!("{lm}"))
                    .unwrap_or_default(),
                item.storage_class()
                    .map(|sc| format!("{sc}"))
                    .unwrap_or_default()
            );
        }
    }

    // Prepare a ByteStream around the file, and upload the object using that ByteStream.
    let body = aws_sdk_s3::primitives::ByteStream::from_path(filepath)
        .await
        .map_err(|err| {
            S3ExampleError::new(format!(
                "Failed to create bytestream for {filepath:?} ({err:?})"
            ))
        })?;
    let resp = client
        .put_object()
        .bucket(bucket)
        .key(key)
        .body(body)
        .send()
        .await?;
    println!(
        "Upload success. Version: {:?}",
        resp.version_id()
            .expect("S3 Object upload missing version ID")
    );

    // Retrieve the just-uploaded object.
    let resp = client.get_object().bucket(bucket).key(key).send().await?;
    println!("etag: {}", resp.e_tag().unwrap_or("(missing)"));
    println!("version: {}", resp.version_id().unwrap_or("(missing)"));
    Ok(())
}
S3ExampleError utilities.
/// S3ExampleError provides a From<T: ProvideErrorMetadata> impl to extract
/// client-specific error details. This serves as a consistent backup to handling
/// specific service errors, depending on what is needed by the scenario.
/// It is used throughout the code examples for the AWS SDK for Rust.
#[derive(Debug)]
pub struct S3ExampleError(String);

impl S3ExampleError {
    pub fn new(value: impl Into<String>) -> Self {
        S3ExampleError(value.into())
    }

    pub fn add_message(self, message: impl Into<String>) -> Self {
        S3ExampleError(format!("{}: {}", message.into(), self.0))
    }
}

impl<T: aws_sdk_s3::error::ProvideErrorMetadata> From<T> for S3ExampleError {
    fn from(value: T) -> Self {
        S3ExampleError(format!(
            "{}: {}",
            value
                .code()
                .map(String::from)
                .unwrap_or("unknown code".into()),
            value
                .message()
                .map(String::from)
                .unwrap_or("missing reason".into()),
        ))
    }
}

impl std::error::Error for S3ExampleError {}

impl std::fmt::Display for S3ExampleError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.0)
    }
}
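A short usage sketch of these helpers (the message strings are hypothetical):

// Hypothetical usage: annotate a low-level failure with scenario context.
let err = S3ExampleError::new("connection timed out")
    .add_message("Failed to upload sample.txt");
assert_eq!(
    format!("{err}"),
    "Failed to upload sample.txt: connection timed out"
);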
For API details, see ListBuckets in the AWS SDK for Rust API Reference.
Basics
The following code example shows how to:
Create a bucket and upload a file to it.
Download an object from a bucket.
Copy an object to a subfolder in a bucket.
List the objects in a bucket.
Delete the bucket objects and the bucket.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Code for the binary crate which runs the scenario.
#![allow(clippy::result_large_err)]

//! Purpose
//! Shows how to use the AWS SDK for Rust to get started using
//! Amazon Simple Storage Service (Amazon S3). Create a bucket, move objects into and out of it,
//! and delete all resources at the end of the demo.
//!
//! This example follows the steps in "Getting started with Amazon S3" in the Amazon S3
//! user guide.
//! - https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html

use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::{config::Region, Client};
use s3_code_examples::error::S3ExampleError;
use uuid::Uuid;

#[tokio::main]
async fn main() -> Result<(), S3ExampleError> {
    let region_provider = RegionProviderChain::first_try(Region::new("us-west-2"));
    let region = region_provider.region().await.unwrap();
    let shared_config = aws_config::from_env().region(region_provider).load().await;
    let client = Client::new(&shared_config);
    let bucket_name = format!("amzn-s3-demo-bucket-{}", Uuid::new_v4());
    let file_name = "s3/testfile.txt".to_string();
    let key = "test file key name".to_string();
    let target_key = "target_key".to_string();

    if let Err(e) = run_s3_operations(region, client, bucket_name, file_name, key, target_key).await
    {
        eprintln!("{:?}", e);
    };

    Ok(())
}

async fn run_s3_operations(
    region: Region,
    client: Client,
    bucket_name: String,
    file_name: String,
    key: String,
    target_key: String,
) -> Result<(), S3ExampleError> {
    s3_code_examples::create_bucket(&client, &bucket_name, &region).await?;
    let run_example: Result<(), S3ExampleError> = (async {
        s3_code_examples::upload_object(&client, &bucket_name, &file_name, &key).await?;
        let _object = s3_code_examples::download_object(&client, &bucket_name, &key).await;
        s3_code_examples::copy_object(&client, &bucket_name, &bucket_name, &key, &target_key)
            .await?;
        s3_code_examples::list_objects(&client, &bucket_name).await?;
        s3_code_examples::clear_bucket(&client, &bucket_name).await?;
        Ok(())
    })
    .await;

    if let Err(err) = run_example {
        eprintln!("Failed to complete getting-started example: {err:?}");
    }

    s3_code_examples::delete_bucket(&client, &bucket_name).await?;
    Ok(())
}
Common actions used by the scenario.
pub async fn create_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    region: &aws_config::Region,
) -> Result<Option<aws_sdk_s3::operation::create_bucket::CreateBucketOutput>, S3ExampleError> {
    let constraint = aws_sdk_s3::types::BucketLocationConstraint::from(region.to_string().as_str());
    let cfg = aws_sdk_s3::types::CreateBucketConfiguration::builder()
        .location_constraint(constraint)
        .build();
    let create = client
        .create_bucket()
        .create_bucket_configuration(cfg)
        .bucket(bucket_name)
        .send()
        .await;

    // BucketAlreadyExists and BucketAlreadyOwnedByYou are not problems for this task.
    create.map(Some).or_else(|err| {
        if err
            .as_service_error()
            .map(|se| se.is_bucket_already_exists() || se.is_bucket_already_owned_by_you())
            == Some(true)
        {
            Ok(None)
        } else {
            Err(S3ExampleError::from(err))
        }
    })
}

pub async fn upload_object(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    file_name: &str,
    key: &str,
) -> Result<aws_sdk_s3::operation::put_object::PutObjectOutput, S3ExampleError> {
    let body = aws_sdk_s3::primitives::ByteStream::from_path(std::path::Path::new(file_name)).await;
    client
        .put_object()
        .bucket(bucket_name)
        .key(key)
        .body(body.unwrap())
        .send()
        .await
        .map_err(S3ExampleError::from)
}

pub async fn download_object(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    key: &str,
) -> Result<aws_sdk_s3::operation::get_object::GetObjectOutput, S3ExampleError> {
    client
        .get_object()
        .bucket(bucket_name)
        .key(key)
        .send()
        .await
        .map_err(S3ExampleError::from)
}

/// Copy an object from one bucket to another.
pub async fn copy_object(
    client: &aws_sdk_s3::Client,
    source_bucket: &str,
    destination_bucket: &str,
    source_object: &str,
    destination_object: &str,
) -> Result<(), S3ExampleError> {
    let source_key = format!("{source_bucket}/{source_object}");
    let response = client
        .copy_object()
        .copy_source(&source_key)
        .bucket(destination_bucket)
        .key(destination_object)
        .send()
        .await?;
    println!(
        "Copied from {source_key} to {destination_bucket}/{destination_object} with etag {}",
        response
            .copy_object_result
            .unwrap_or_else(|| aws_sdk_s3::types::CopyObjectResult::builder().build())
            .e_tag()
            .unwrap_or("missing")
    );
    Ok(())
}

pub async fn list_objects(client: &aws_sdk_s3::Client, bucket: &str) -> Result<(), S3ExampleError> {
    let mut response = client
        .list_objects_v2()
        .bucket(bucket.to_owned())
        .max_keys(10) // In this example, go 10 at a time.
        .into_paginator()
        .send();

    while let Some(result) = response.next().await {
        match result {
            Ok(output) => {
                for object in output.contents() {
                    println!(" - {}", object.key().unwrap_or("Unknown"));
                }
            }
            Err(err) => {
                eprintln!("{err:?}")
            }
        }
    }

    Ok(())
}

/// Given a bucket, remove all objects in the bucket, and then ensure no objects
/// remain in the bucket.
pub async fn clear_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
) -> Result<Vec<String>, S3ExampleError> {
    let objects = client.list_objects_v2().bucket(bucket_name).send().await?;

    let objects_to_delete: Vec<String> = objects
        .contents()
        .iter()
        .filter_map(|obj| obj.key())
        .map(String::from)
        .collect();

    if objects_to_delete.is_empty() {
        return Ok(vec![]);
    }

    let return_keys = objects_to_delete.clone();
    delete_objects(client, bucket_name, objects_to_delete).await?;

    let objects = client.list_objects_v2().bucket(bucket_name).send().await?;
    eprintln!("{objects:?}");
    match objects.key_count {
        Some(0) => Ok(return_keys),
        _ => Err(S3ExampleError::new(
            "There were still objects left in the bucket.",
        )),
    }
}

pub async fn delete_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
) -> Result<(), S3ExampleError> {
    let resp = client.delete_bucket().bucket(bucket_name).send().await;
    match resp {
        Ok(_) => Ok(()),
        Err(err) => {
            if err
                .as_service_error()
                .and_then(aws_sdk_s3::error::ProvideErrorMetadata::code)
                == Some("NoSuchBucket")
            {
                Ok(())
            } else {
                Err(S3ExampleError::from(err))
            }
        }
    }
}
For API details, see the following topics in the AWS SDK for Rust API Reference.
Actions
The following code example shows how to use CompleteMultipartUpload.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

// upload_parts: Vec<aws_sdk_s3::types::CompletedPart>
let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
    .set_parts(Some(upload_parts))
    .build();

let _complete_multipart_upload_res = client
    .complete_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .multipart_upload(completed_multipart_upload)
    .upload_id(upload_id)
    .send()
    .await?;
// Create a multipart upload. Use UploadPart and CompleteMultipartUpload to
// upload the file.
let multipart_upload_res: CreateMultipartUploadOutput = client
    .create_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .send()
    .await?;

let upload_id = multipart_upload_res.upload_id().ok_or(S3ExampleError::new(
    "Missing upload_id after CreateMultipartUpload",
))?;
let mut upload_parts: Vec<aws_sdk_s3::types::CompletedPart> = Vec::new();

for chunk_index in 0..chunk_count {
    let this_chunk = if chunk_count - 1 == chunk_index {
        size_of_last_chunk
    } else {
        CHUNK_SIZE
    };
    let stream = ByteStream::read_from()
        .path(path)
        .offset(chunk_index * CHUNK_SIZE)
        .length(Length::Exact(this_chunk))
        .build()
        .await
        .unwrap();

    // Chunk index needs to start at 0, but part numbers start at 1.
    let part_number = (chunk_index as i32) + 1;
    let upload_part_res = client
        .upload_part()
        .key(&key)
        .bucket(&bucket_name)
        .upload_id(upload_id)
        .body(stream)
        .part_number(part_number)
        .send()
        .await?;

    upload_parts.push(
        CompletedPart::builder()
            .e_tag(upload_part_res.e_tag.unwrap_or_default())
            .part_number(part_number)
            .build(),
    );
}
For API details, see CompleteMultipartUpload in the AWS SDK for Rust API Reference.
The following code example shows how to use CopyObject.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/// Copy an object from one bucket to another.
pub async fn copy_object(
    client: &aws_sdk_s3::Client,
    source_bucket: &str,
    destination_bucket: &str,
    source_object: &str,
    destination_object: &str,
) -> Result<(), S3ExampleError> {
    let source_key = format!("{source_bucket}/{source_object}");
    let response = client
        .copy_object()
        .copy_source(&source_key)
        .bucket(destination_bucket)
        .key(destination_object)
        .send()
        .await?;
    println!(
        "Copied from {source_key} to {destination_bucket}/{destination_object} with etag {}",
        response
            .copy_object_result
            .unwrap_or_else(|| aws_sdk_s3::types::CopyObjectResult::builder().build())
            .e_tag()
            .unwrap_or("missing")
    );
    Ok(())
}
For API details, see CopyObject in the AWS SDK for Rust API Reference.
The following code example shows how to use CreateBucket.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn create_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    region: &aws_config::Region,
) -> Result<Option<aws_sdk_s3::operation::create_bucket::CreateBucketOutput>, S3ExampleError> {
    let constraint = aws_sdk_s3::types::BucketLocationConstraint::from(region.to_string().as_str());
    let cfg = aws_sdk_s3::types::CreateBucketConfiguration::builder()
        .location_constraint(constraint)
        .build();
    let create = client
        .create_bucket()
        .create_bucket_configuration(cfg)
        .bucket(bucket_name)
        .send()
        .await;

    // BucketAlreadyExists and BucketAlreadyOwnedByYou are not problems for this task.
    create.map(Some).or_else(|err| {
        if err
            .as_service_error()
            .map(|se| se.is_bucket_already_exists() || se.is_bucket_already_owned_by_you())
            == Some(true)
        {
            Ok(None)
        } else {
            Err(S3ExampleError::from(err))
        }
    })
}
For API details, see CreateBucket in the AWS SDK for Rust API Reference.
The following code example shows how to use CreateMultipartUpload.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

// Create a multipart upload. Use UploadPart and CompleteMultipartUpload to
// upload the file.
let multipart_upload_res: CreateMultipartUploadOutput = client
    .create_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .send()
    .await?;

let upload_id = multipart_upload_res.upload_id().ok_or(S3ExampleError::new(
    "Missing upload_id after CreateMultipartUpload",
))?;
let mut upload_parts: Vec<aws_sdk_s3::types::CompletedPart> = Vec::new();

for chunk_index in 0..chunk_count {
    let this_chunk = if chunk_count - 1 == chunk_index {
        size_of_last_chunk
    } else {
        CHUNK_SIZE
    };
    let stream = ByteStream::read_from()
        .path(path)
        .offset(chunk_index * CHUNK_SIZE)
        .length(Length::Exact(this_chunk))
        .build()
        .await
        .unwrap();

    // Chunk index needs to start at 0, but part numbers start at 1.
    let part_number = (chunk_index as i32) + 1;
    let upload_part_res = client
        .upload_part()
        .key(&key)
        .bucket(&bucket_name)
        .upload_id(upload_id)
        .body(stream)
        .part_number(part_number)
        .send()
        .await?;

    upload_parts.push(
        CompletedPart::builder()
            .e_tag(upload_part_res.e_tag.unwrap_or_default())
            .part_number(part_number)
            .build(),
    );
}
// upload_parts: Vec<aws_sdk_s3::types::CompletedPart>
let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
    .set_parts(Some(upload_parts))
    .build();

let _complete_multipart_upload_res = client
    .complete_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .multipart_upload(completed_multipart_upload)
    .upload_id(upload_id)
    .send()
    .await?;
For API details, see CreateMultipartUpload in the AWS SDK for Rust API Reference.
The following code example shows how to use DeleteBucket.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn delete_bucket(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
) -> Result<(), S3ExampleError> {
    let resp = client.delete_bucket().bucket(bucket_name).send().await;
    match resp {
        Ok(_) => Ok(()),
        Err(err) => {
            if err
                .as_service_error()
                .and_then(aws_sdk_s3::error::ProvideErrorMetadata::code)
                == Some("NoSuchBucket")
            {
                Ok(())
            } else {
                Err(S3ExampleError::from(err))
            }
        }
    }
}
For API details, see DeleteBucket in the AWS SDK for Rust API Reference.
The following code example shows how to use DeleteObject.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/// Delete an object from a bucket.
pub async fn remove_object(
    client: &aws_sdk_s3::Client,
    bucket: &str,
    key: &str,
) -> Result<(), S3ExampleError> {
    client
        .delete_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await?;

    // There are no modeled errors to handle when deleting an object.

    Ok(())
}
For API details, see DeleteObject in the AWS SDK for Rust API Reference.
The following code example shows how to use DeleteObjects.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/// Delete the objects in a bucket.
pub async fn delete_objects(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    objects_to_delete: Vec<String>,
) -> Result<(), S3ExampleError> {
    // Push into a mut vector to use `?` early return errors while building object keys.
    let mut delete_object_ids: Vec<aws_sdk_s3::types::ObjectIdentifier> = vec![];
    for obj in objects_to_delete {
        let obj_id = aws_sdk_s3::types::ObjectIdentifier::builder()
            .key(obj)
            .build()
            .map_err(|err| {
                S3ExampleError::new(format!("Failed to build key for delete_object: {err:?}"))
            })?;
        delete_object_ids.push(obj_id);
    }

    client
        .delete_objects()
        .bucket(bucket_name)
        .delete(
            aws_sdk_s3::types::Delete::builder()
                .set_objects(Some(delete_object_ids))
                .build()
                .map_err(|err| {
                    S3ExampleError::new(format!("Failed to build delete_object input {err:?}"))
                })?,
        )
        .send()
        .await?;
    Ok(())
}
For API details, see DeleteObjects in the AWS SDK for Rust API Reference.
The following code example shows how to use GetBucketLocation.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn show_buckets(
    strict: bool,
    client: &Client,
    region: BucketLocationConstraint,
) -> Result<(), S3ExampleError> {
    let mut buckets = client.list_buckets().into_paginator().send();

    let mut num_buckets = 0;
    let mut in_region = 0;
    while let Some(Ok(output)) = buckets.next().await {
        for bucket in output.buckets() {
            num_buckets += 1;
            if strict {
                let r = client
                    .get_bucket_location()
                    .bucket(bucket.name().unwrap_or_default())
                    .send()
                    .await?;

                if r.location_constraint() == Some(&region) {
                    println!("{}", bucket.name().unwrap_or_default());
                    in_region += 1;
                }
            } else {
                println!("{}", bucket.name().unwrap_or_default());
            }
        }
    }

    println!();
    if strict {
        println!(
            "Found {} buckets in the {} region out of a total of {} buckets.",
            in_region, region, num_buckets
        );
    } else {
        println!("Found {} buckets in all regions.", num_buckets);
    }

    Ok(())
}
For API details, see GetBucketLocation in the AWS SDK for Rust API Reference.
The following code example shows how to use GetObject.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn get_object(client: Client, opt: Opt) -> Result<usize, S3ExampleError> {
    trace!("bucket: {}", opt.bucket);
    trace!("object: {}", opt.object);
    trace!("destination: {}", opt.destination.display());

    let mut file = File::create(opt.destination.clone()).map_err(|err| {
        S3ExampleError::new(format!(
            "Failed to initialize file for saving S3 download: {err:?}"
        ))
    })?;

    let mut object = client
        .get_object()
        .bucket(opt.bucket)
        .key(opt.object)
        .send()
        .await?;

    let mut byte_count = 0_usize;
    while let Some(bytes) = object.body.try_next().await.map_err(|err| {
        S3ExampleError::new(format!("Failed to read from S3 download stream: {err:?}"))
    })? {
        let bytes_len = bytes.len();
        file.write_all(&bytes).map_err(|err| {
            S3ExampleError::new(format!(
                "Failed to write from S3 download stream to local file: {err:?}"
            ))
        })?;
        trace!("Intermediate write of {bytes_len}");
        byte_count += bytes_len;
    }

    Ok(byte_count)
}
For API details, see GetObject in the AWS SDK for Rust API Reference.
The following code example shows how to use ListBuckets.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn show_buckets(
    strict: bool,
    client: &Client,
    region: BucketLocationConstraint,
) -> Result<(), S3ExampleError> {
    let mut buckets = client.list_buckets().into_paginator().send();

    let mut num_buckets = 0;
    let mut in_region = 0;
    while let Some(Ok(output)) = buckets.next().await {
        for bucket in output.buckets() {
            num_buckets += 1;
            if strict {
                let r = client
                    .get_bucket_location()
                    .bucket(bucket.name().unwrap_or_default())
                    .send()
                    .await?;

                if r.location_constraint() == Some(&region) {
                    println!("{}", bucket.name().unwrap_or_default());
                    in_region += 1;
                }
            } else {
                println!("{}", bucket.name().unwrap_or_default());
            }
        }
    }

    println!();
    if strict {
        println!(
            "Found {} buckets in the {} region out of a total of {} buckets.",
            in_region, region, num_buckets
        );
    } else {
        println!("Found {} buckets in all regions.", num_buckets);
    }

    Ok(())
}
For API details, see ListBuckets in the AWS SDK for Rust API Reference.
The following code example shows how to use ListObjectVersions.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn show_versions(client: &Client, bucket: &str) -> Result<(), Error> {
    let resp = client.list_object_versions().bucket(bucket).send().await?;

    for version in resp.versions() {
        println!("{}", version.key().unwrap_or_default());
        println!("  version ID: {}", version.version_id().unwrap_or_default());
        println!();
    }

    Ok(())
}
For API details, see ListObjectVersions in the AWS SDK for Rust API Reference.
The following code example shows how to use ListObjectsV2.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn list_objects(client: &aws_sdk_s3::Client, bucket: &str) -> Result<(), S3ExampleError> {
    let mut response = client
        .list_objects_v2()
        .bucket(bucket.to_owned())
        .max_keys(10) // In this example, go 10 at a time.
        .into_paginator()
        .send();

    while let Some(result) = response.next().await {
        match result {
            Ok(output) => {
                for object in output.contents() {
                    println!(" - {}", object.key().unwrap_or("Unknown"));
                }
            }
            Err(err) => {
                eprintln!("{err:?}")
            }
        }
    }

    Ok(())
}
For API details, see ListObjectsV2 in the AWS SDK for Rust API Reference.
The following code example shows how to use PutObject.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn upload_object(
    client: &aws_sdk_s3::Client,
    bucket_name: &str,
    file_name: &str,
    key: &str,
) -> Result<aws_sdk_s3::operation::put_object::PutObjectOutput, S3ExampleError> {
    let body = aws_sdk_s3::primitives::ByteStream::from_path(std::path::Path::new(file_name)).await;
    client
        .put_object()
        .bucket(bucket_name)
        .key(key)
        .body(body.unwrap())
        .send()
        .await
        .map_err(S3ExampleError::from)
}
For API details, see PutObject in the AWS SDK for Rust API Reference.
The following code example shows how to use UploadPart.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

let mut upload_parts: Vec<aws_sdk_s3::types::CompletedPart> = Vec::new();

for chunk_index in 0..chunk_count {
    let this_chunk = if chunk_count - 1 == chunk_index {
        size_of_last_chunk
    } else {
        CHUNK_SIZE
    };
    let stream = ByteStream::read_from()
        .path(path)
        .offset(chunk_index * CHUNK_SIZE)
        .length(Length::Exact(this_chunk))
        .build()
        .await
        .unwrap();

    // Chunk index needs to start at 0, but part numbers start at 1.
    let part_number = (chunk_index as i32) + 1;
    let upload_part_res = client
        .upload_part()
        .key(&key)
        .bucket(&bucket_name)
        .upload_id(upload_id)
        .body(stream)
        .part_number(part_number)
        .send()
        .await?;

    upload_parts.push(
        CompletedPart::builder()
            .e_tag(upload_part_res.e_tag.unwrap_or_default())
            .part_number(part_number)
            .build(),
    );
}
// Create a multipart upload. Use UploadPart and CompleteMultipartUpload to
// upload the file.
let multipart_upload_res: CreateMultipartUploadOutput = client
    .create_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .send()
    .await?;

let upload_id = multipart_upload_res.upload_id().ok_or(S3ExampleError::new(
    "Missing upload_id after CreateMultipartUpload",
))?;
// upload_parts: Vec<aws_sdk_s3::types::CompletedPart>
let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
    .set_parts(Some(upload_parts))
    .build();

let _complete_multipart_upload_res = client
    .complete_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .multipart_upload(completed_multipart_upload)
    .upload_id(upload_id)
    .send()
    .await?;
For API details, see UploadPart in the AWS SDK for Rust API Reference.
Scenarios
The following code example shows how to:
Use Amazon Polly to synthesize a plain text (UTF-8) input file to an audio file.
Upload the audio file to an Amazon S3 bucket.
Use Amazon Transcribe to convert the audio file to text.
Display the text.
- SDK for Rust
Use Amazon Polly to synthesize a plain text (UTF-8) input file to an audio file, upload the audio file to an Amazon S3 bucket, use Amazon Transcribe to convert that audio file to text, and display the text.
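The flow can be sketched roughly as follows, assuming the aws-sdk-polly, aws-sdk-s3, and aws-sdk-transcribe crates. This is a minimal sketch, not the full example: the bucket, key, job name, and voice are placeholder assumptions, and the full example on GitHub also polls GetTranscriptionJob and prints the resulting transcript.

use aws_sdk_polly::types::{OutputFormat, VoiceId};
use aws_sdk_transcribe::types::{LanguageCode, Media, MediaFormat};

async fn synthesize_and_transcribe(
    polly: &aws_sdk_polly::Client,
    s3: &aws_sdk_s3::Client,
    transcribe: &aws_sdk_transcribe::Client,
    text: &str,
) -> Result<(), Box<dyn std::error::Error>> {
    // Synthesize the input text to MP3 audio with Amazon Polly.
    let synth = polly
        .synthesize_speech()
        .output_format(OutputFormat::Mp3)
        .voice_id(VoiceId::Joanna) // Placeholder voice.
        .text(text)
        .send()
        .await?;
    let audio = synth.audio_stream.collect().await?.into_bytes();

    // Upload the audio to S3 (bucket and key are placeholders).
    s3.put_object()
        .bucket("amzn-s3-demo-bucket")
        .key("speech.mp3")
        .body(audio.into())
        .send()
        .await?;

    // Start an asynchronous transcription job that reads the object from S3.
    transcribe
        .start_transcription_job()
        .transcription_job_name("speech-demo-job") // Placeholder job name.
        .language_code(LanguageCode::EnUs)
        .media_format(MediaFormat::Mp3)
        .media(
            Media::builder()
                .media_file_uri("s3://amzn-s3-demo-bucket/speech.mp3")
                .build(),
        )
        .send()
        .await?;

    // The full example then polls GetTranscriptionJob until the job
    // completes and displays the transcript text.
    Ok(())
}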
For complete source code and instructions on how to set up and run, see the full example on GitHub.
Services used in this example
Amazon Polly
Amazon S3
Amazon Transcribe
The following code example shows how to create a presigned URL for Amazon S3 and upload an object.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Create presigning requests to GET S3 objects.
/// Generate a URL for a presigned GET request.
async fn get_object(
    client: &Client,
    bucket: &str,
    object: &str,
    expires_in: u64,
) -> Result<(), Box<dyn Error>> {
    let expires_in = Duration::from_secs(expires_in);
    let presigned_request = client
        .get_object()
        .bucket(bucket)
        .key(object)
        .presigned(PresigningConfig::expires_in(expires_in)?)
        .await?;

    println!("Object URI: {}", presigned_request.uri());
    let valid_until = chrono::offset::Local::now() + expires_in;
    println!("Valid until: {valid_until}");

    Ok(())
}
Create presigning requests to PUT S3 objects.
async fn put_object(
    client: &Client,
    bucket: &str,
    object: &str,
    expires_in: u64,
) -> Result<String, S3ExampleError> {
    let expires_in: std::time::Duration = std::time::Duration::from_secs(expires_in);
    let expires_in: aws_sdk_s3::presigning::PresigningConfig =
        PresigningConfig::expires_in(expires_in).map_err(|err| {
            S3ExampleError::new(format!(
                "Failed to convert expiration to PresigningConfig: {err:?}"
            ))
        })?;

    let presigned_request = client
        .put_object()
        .bucket(bucket)
        .key(object)
        .presigned(expires_in)
        .await?;

    Ok(presigned_request.uri().into())
}
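The returned URL can then be used by any HTTP client, with no AWS credentials, until it expires. A minimal usage sketch with the reqwest crate; the bucket, key, and expiration values below are placeholder assumptions:

// Hypothetical caller: upload a small body through the presigned URL.
// The request needs no AWS credentials; the signature is in the URL itself.
let url = put_object(&client, "amzn-s3-demo-bucket", "upload.txt", 900).await?;
let response = reqwest::Client::new()
    .put(&url)
    .body("Hello from a presigned request!")
    .send()
    .await?;
assert!(response.status().is_success(), "presigned PUT failed");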
The following code example shows how to create a serverless application that lets users manage photos using labels.
- SDK for Rust
Shows how to develop a photo asset management application that detects labels in images using Amazon Rekognition and stores them for later retrieval.
For complete source code and instructions on how to set up and run, see the full example on GitHub.
For a deep dive into the origin of this example, see the post on AWS Community.
Services used in this example
API Gateway
DynamoDB
Lambda
Amazon Rekognition
Amazon S3
Amazon SNS
The following code example shows how to:
Save an image in an Amazon S3 bucket.
Use Amazon Rekognition to detect facial details, such as age range, gender, and emotion (such as smiling).
Display those details.
- SDK for Rust
Save the image in an Amazon S3 bucket with an uploads prefix, use Amazon Rekognition to detect facial details, such as age range, gender, and emotion (smiling, etc.), and display those details.
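A minimal sketch of the Rekognition call at the center of this scenario, assuming the aws-sdk-rekognition crate and that the image has already been uploaded; the bucket and key names are placeholder assumptions:

use aws_sdk_rekognition::types::{Attribute, Image, S3Object};

async fn describe_faces(
    rekognition: &aws_sdk_rekognition::Client,
) -> Result<(), aws_sdk_rekognition::Error> {
    // Ask Rekognition to analyze an object already stored in S3.
    let response = rekognition
        .detect_faces()
        .image(
            Image::builder()
                .s3_object(
                    S3Object::builder()
                        .bucket("amzn-s3-demo-bucket") // Placeholder bucket.
                        .name("uploads/face.jpg") // Placeholder key.
                        .build(),
                )
                .build(),
        )
        .attributes(Attribute::All)
        .send()
        .await?;

    // Print a few of the detected facial details.
    for face in response.face_details() {
        if let Some(age) = face.age_range() {
            println!(
                "Age range: {}-{}",
                age.low().unwrap_or_default(),
                age.high().unwrap_or_default()
            );
        }
        if let Some(gender) = face.gender() {
            println!("Gender: {:?}", gender.value());
        }
        if let Some(smile) = face.smile() {
            println!("Smiling: {}", smile.value());
        }
    }
    Ok(())
}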
For complete source code and instructions on how to set up and run, see the full example on GitHub.
Services used in this example
Amazon Rekognition
Amazon S3
The following code example shows how to read data from an object in an S3 bucket, but only if that object has not been modified since the last retrieval time.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

use aws_sdk_s3::{
    error::SdkError,
    primitives::{ByteStream, DateTime, DateTimeFormat},
    Client,
};
use s3_code_examples::error::S3ExampleError;
use tracing::{error, warn};

const KEY: &str = "key";
const BODY: &str = "Hello, world!";

/// Demonstrate how `if-modified-since` reports that matching objects haven't
/// changed.
///
/// # Steps
/// - Create a bucket.
/// - Put an object in the bucket.
/// - Get the bucket headers.
/// - Get the bucket headers again but only if modified.
/// - Delete the bucket.
#[tokio::main]
async fn main() -> Result<(), S3ExampleError> {
    tracing_subscriber::fmt::init();

    // Get a new UUID to use when creating a unique bucket name.
    let uuid = uuid::Uuid::new_v4();

    // Load the AWS configuration from the environment.
    let client = Client::new(&aws_config::load_from_env().await);

    // Generate a unique bucket name using the previously generated UUID.
    // Then create a new bucket with that name.
    let bucket_name = format!("if-modified-since-{uuid}");
    client
        .create_bucket()
        .bucket(bucket_name.clone())
        .send()
        .await?;

    // Create a new object in the bucket whose name is `KEY` and whose
    // contents are `BODY`.
    let put_object_output = client
        .put_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .body(ByteStream::from_static(BODY.as_bytes()))
        .send()
        .await;

    // If the `PutObject` succeeded, get the eTag string from it. Otherwise,
    // report an error and return an empty string.
    let e_tag_1 = match put_object_output {
        Ok(put_object) => put_object.e_tag.unwrap(),
        Err(err) => {
            error!("{err:?}");
            String::new()
        }
    };

    // Request the object's headers.
    let head_object_output = client
        .head_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .send()
        .await;

    // If the `HeadObject` request succeeded, create a tuple containing the
    // values of the headers `last-modified` and `etag`. If the request
    // failed, return the error in a tuple instead.
    let (last_modified, e_tag_2) = match head_object_output {
        Ok(head_object) => (
            Ok(head_object.last_modified().cloned().unwrap()),
            head_object.e_tag.unwrap(),
        ),
        Err(err) => (Err(err), String::new()),
    };
    warn!("last modified: {last_modified:?}");
    assert_eq!(
        e_tag_1, e_tag_2,
        "PutObject and first HeadObject had differing eTags"
    );

    println!("First value of last_modified: {last_modified:?}");
    println!("First tag: {}\n", e_tag_1);

    // Send a second `HeadObject` request. This time, the `if_modified_since`
    // option is specified, giving the `last_modified` value returned by the
    // first call to `HeadObject`.
    //
    // Since the object hasn't been changed, and there are no other objects in
    // the bucket, there should be no matching objects.
    let head_object_output = client
        .head_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .if_modified_since(last_modified.unwrap())
        .send()
        .await;

    // If the `HeadObject` request succeeded, the result is a tuple containing
    // the `last_modified` and `e_tag_1` properties. This is _not_ the expected
    // result.
    //
    // The _expected_ result of the second call to `HeadObject` is an
    // `SdkError::ServiceError` containing the HTTP error response. If that's
    // the case and the HTTP status is 304 (not modified), the output is a
    // tuple containing the values of the HTTP `last-modified` and `etag`
    // headers.
    //
    // If any other HTTP error occurred, the error is returned as an
    // `SdkError::ServiceError`.
    let (last_modified, e_tag_2) = match head_object_output {
        Ok(head_object) => (
            Ok(head_object.last_modified().cloned().unwrap()),
            head_object.e_tag.unwrap(),
        ),
        Err(err) => match err {
            SdkError::ServiceError(err) => {
                // Get the raw HTTP response. If its status is 304, the
                // object has not changed. This is the expected code path.
                let http = err.raw();
                match http.status().as_u16() {
                    // If the HTTP status is 304: Not Modified, return a
                    // tuple containing the values of the HTTP
                    // `last-modified` and `etag` headers.
                    304 => (
                        Ok(DateTime::from_str(
                            http.headers().get("last-modified").unwrap(),
                            DateTimeFormat::HttpDate,
                        )
                        .unwrap()),
                        http.headers().get("etag").map(|t| t.into()).unwrap(),
                    ),
                    // Any other HTTP status code is returned as an
                    // `SdkError::ServiceError`.
                    _ => (Err(SdkError::ServiceError(err)), String::new()),
                }
            }
            // Any other kind of error is returned in a tuple containing the
            // error and an empty string.
            _ => (Err(err), String::new()),
        },
    };
    warn!("last modified: {last_modified:?}");
    assert_eq!(
        e_tag_1, e_tag_2,
        "PutObject and second HeadObject had different eTags"
    );

    println!("Second value of last modified: {last_modified:?}");
    println!("Second tag: {}", e_tag_2);

    // Clean up by deleting the object and the bucket.
    client
        .delete_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .send()
        .await?;
    client
        .delete_bucket()
        .bucket(bucket_name.as_str())
        .send()
        .await?;

    Ok(())
}
For API details, see GetObject in the AWS SDK for Rust API Reference.
The following code example shows how to:
Get EXIF information from a JPG, JPEG, or PNG file.
Upload the image file to an Amazon S3 bucket.
Use Amazon Rekognition to identify the three top attributes (labels) in the file.
Add the EXIF and label information to an Amazon DynamoDB table in the Region.
- SDK for Rust
Get EXIF information from a JPG, JPEG, or PNG file, upload the image file to an Amazon S3 bucket, use Amazon Rekognition to identify the three top attributes (labels in Amazon Rekognition) in the file, and add the EXIF and label information to an Amazon DynamoDB table in the Region.
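A minimal sketch of the Rekognition and DynamoDB calls in this scenario, assuming the aws-sdk-rekognition and aws-sdk-dynamodb crates; the bucket, key, and table names are placeholder assumptions, and the full example also extracts the EXIF fields before writing the item:

use aws_sdk_dynamodb::types::AttributeValue;
use aws_sdk_rekognition::types::{Image, S3Object};

async fn label_and_record(
    rekognition: &aws_sdk_rekognition::Client,
    dynamodb: &aws_sdk_dynamodb::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    // Identify the top three labels in the uploaded image.
    let labels = rekognition
        .detect_labels()
        .image(
            Image::builder()
                .s3_object(
                    S3Object::builder()
                        .bucket("amzn-s3-demo-bucket") // Placeholder bucket.
                        .name("uploads/photo.jpg") // Placeholder key.
                        .build(),
                )
                .build(),
        )
        .max_labels(3)
        .send()
        .await?;

    // Store one DynamoDB item per label (the full example also stores the
    // EXIF height, width, and creation date alongside each label).
    for label in labels.labels() {
        dynamodb
            .put_item()
            .table_name("photo-labels") // Placeholder table name.
            .item("filename", AttributeValue::S("uploads/photo.jpg".into()))
            .item(
                "label",
                AttributeValue::S(label.name().unwrap_or_default().into()),
            )
            .send()
            .await?;
    }
    Ok(())
}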
For complete source code and instructions on how to set up and run, see the full example on GitHub.
Services used in this example
DynamoDB
Amazon Rekognition
Amazon S3
The following code example shows best-practice techniques for writing unit and integration tests using an AWS SDK.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Cargo.toml for testing examples.
[package]
name = "testing-examples"
version = "0.1.0"
authors = [
  "John Disanti <jdisanti@amazon.com>",
  "Doug Schwartz <dougsch@amazon.com>",
]
edition = "2021"

[dependencies]
async-trait = "0.1.51"
aws-config = { version = "1.0.1", features = ["behavior-version-latest"] }
aws-credential-types = { version = "1.0.1", features = [
  "hardcoded-credentials",
] }
aws-sdk-s3 = { version = "1.4.0" }
aws-smithy-types = { version = "1.0.1" }
aws-smithy-runtime = { version = "1.0.1", features = ["test-util"] }
aws-smithy-runtime-api = { version = "1.0.1", features = ["test-util"] }
aws-types = { version = "1.0.1" }
clap = { version = "4.4", features = ["derive"] }
http = "0.2.9"
mockall = "0.11.4"
serde_json = "1"
tokio = { version = "1.20.1", features = ["full"] }
tracing-subscriber = { version = "0.3.15", features = ["env-filter"] }

[[bin]]
name = "main"
path = "src/main.rs"
Unit testing example using automock and a service wrapper.
use aws_sdk_s3 as s3;

#[allow(unused_imports)]
use mockall::automock;

use s3::operation::list_objects_v2::{ListObjectsV2Error, ListObjectsV2Output};

#[cfg(test)]
pub use MockS3Impl as S3;
#[cfg(not(test))]
pub use S3Impl as S3;

#[allow(dead_code)]
pub struct S3Impl {
    inner: s3::Client,
}

#[cfg_attr(test, automock)]
impl S3Impl {
    #[allow(dead_code)]
    pub fn new(inner: s3::Client) -> Self {
        Self { inner }
    }

    #[allow(dead_code)]
    pub async fn list_objects(
        &self,
        bucket: &str,
        prefix: &str,
        continuation_token: Option<String>,
    ) -> Result<ListObjectsV2Output, s3::error::SdkError<ListObjectsV2Error>> {
        self.inner
            .list_objects_v2()
            .bucket(bucket)
            .prefix(prefix)
            .set_continuation_token(continuation_token)
            .send()
            .await
    }
}

#[allow(dead_code)]
pub async fn determine_prefix_file_size(
    // Now we take a reference to our trait object instead of the S3 client
    // s3_list: ListObjectsService,
    s3_list: S3,
    bucket: &str,
    prefix: &str,
) -> Result<usize, s3::Error> {
    let mut next_token: Option<String> = None;
    let mut total_size_bytes = 0;
    loop {
        let result = s3_list
            .list_objects(bucket, prefix, next_token.take())
            .await?;

        // Add up the file sizes we got back
        for object in result.contents() {
            total_size_bytes += object.size().unwrap_or(0) as usize;
        }

        // Handle pagination, and break the loop if there are no more pages
        next_token = result.next_continuation_token.clone();
        if next_token.is_none() {
            break;
        }
    }
    Ok(total_size_bytes)
}

#[cfg(test)]
mod test {
    use super::*;
    use mockall::predicate::eq;

    #[tokio::test]
    async fn test_single_page() {
        let mut mock = MockS3Impl::default();
        mock.expect_list_objects()
            .with(eq("test-bucket"), eq("test-prefix"), eq(None))
            .return_once(|_, _, _| {
                Ok(ListObjectsV2Output::builder()
                    .set_contents(Some(vec![
                        // Mock content for ListObjectsV2 response
                        s3::types::Object::builder().size(5).build(),
                        s3::types::Object::builder().size(2).build(),
                    ]))
                    .build())
            });

        // Run the code we want to test with it
        let size = determine_prefix_file_size(mock, "test-bucket", "test-prefix")
            .await
            .unwrap();

        // Verify we got the correct total size back
        assert_eq!(7, size);
    }

    #[tokio::test]
    async fn test_multiple_pages() {
        // Create the Mock instance with two pages of objects now
        let mut mock = MockS3Impl::default();
        mock.expect_list_objects()
            .with(eq("test-bucket"), eq("test-prefix"), eq(None))
            .return_once(|_, _, _| {
                Ok(ListObjectsV2Output::builder()
                    .set_contents(Some(vec![
                        // Mock content for ListObjectsV2 response
                        s3::types::Object::builder().size(5).build(),
                        s3::types::Object::builder().size(2).build(),
                    ]))
                    .set_next_continuation_token(Some("next".to_string()))
                    .build())
            });
        mock.expect_list_objects()
            .with(
                eq("test-bucket"),
                eq("test-prefix"),
                eq(Some("next".to_string())),
            )
            .return_once(|_, _, _| {
                Ok(ListObjectsV2Output::builder()
                    .set_contents(Some(vec![
                        // Mock content for ListObjectsV2 response
                        s3::types::Object::builder().size(3).build(),
                        s3::types::Object::builder().size(9).build(),
                    ]))
                    .build())
            });

        // Run the code we want to test with it
        let size = determine_prefix_file_size(mock, "test-bucket", "test-prefix")
            .await
            .unwrap();

        assert_eq!(19, size);
    }
}
Integration testing example using StaticReplayClient.
use aws_sdk_s3 as s3;

#[allow(dead_code)]
pub async fn determine_prefix_file_size(
    // Now we take a reference to our trait object instead of the S3 client
    // s3_list: ListObjectsService,
    s3: s3::Client,
    bucket: &str,
    prefix: &str,
) -> Result<usize, s3::Error> {
    let mut next_token: Option<String> = None;
    let mut total_size_bytes = 0;
    loop {
        let result = s3
            .list_objects_v2()
            .prefix(prefix)
            .bucket(bucket)
            .set_continuation_token(next_token.take())
            .send()
            .await?;

        // Add up the file sizes we got back
        for object in result.contents() {
            total_size_bytes += object.size().unwrap_or(0) as usize;
        }

        // Handle pagination, and break the loop if there are no more pages
        next_token = result.next_continuation_token.clone();
        if next_token.is_none() {
            break;
        }
    }
    Ok(total_size_bytes)
}

#[allow(dead_code)]
fn make_s3_test_credentials() -> s3::config::Credentials {
    s3::config::Credentials::new(
        "ATESTCLIENT",
        "astestsecretkey",
        Some("atestsessiontoken".to_string()),
        None,
        "",
    )
}

#[cfg(test)]
mod test {
    use super::*;
    use aws_config::BehaviorVersion;
    use aws_sdk_s3 as s3;
    use aws_smithy_runtime::client::http::test_util::{ReplayEvent, StaticReplayClient};
    use aws_smithy_types::body::SdkBody;

    #[tokio::test]
    async fn test_single_page() {
        let page_1 = ReplayEvent::new(
            http::Request::builder()
                .method("GET")
                .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix")
                .body(SdkBody::empty())
                .unwrap(),
            http::Response::builder()
                .status(200)
                .body(SdkBody::from(include_str!("./testing/response_1.xml")))
                .unwrap(),
        );
        let replay_client = StaticReplayClient::new(vec![page_1]);
        let client: s3::Client = s3::Client::from_conf(
            s3::Config::builder()
                .behavior_version(BehaviorVersion::latest())
                .credentials_provider(make_s3_test_credentials())
                .region(s3::config::Region::new("us-east-1"))
                .http_client(replay_client.clone())
                .build(),
        );

        // Run the code we want to test with it
        let size = determine_prefix_file_size(client, "test-bucket", "test-prefix")
            .await
            .unwrap();

        // Verify we got the correct total size back
        assert_eq!(7, size);
        replay_client.assert_requests_match(&[]);
    }

    #[tokio::test]
    async fn test_multiple_pages() {
        let page_1 = ReplayEvent::new(
            http::Request::builder()
                .method("GET")
                .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix")
                .body(SdkBody::empty())
                .unwrap(),
            http::Response::builder()
                .status(200)
                .body(SdkBody::from(include_str!("./testing/response_multi_1.xml")))
                .unwrap(),
        );
        let page_2 = ReplayEvent::new(
            http::Request::builder()
                .method("GET")
                .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix&continuation-token=next")
                .body(SdkBody::empty())
                .unwrap(),
            http::Response::builder()
                .status(200)
                .body(SdkBody::from(include_str!("./testing/response_multi_2.xml")))
                .unwrap(),
        );
        let replay_client = StaticReplayClient::new(vec![page_1, page_2]);
        let client: s3::Client = s3::Client::from_conf(
            s3::Config::builder()
                .behavior_version(BehaviorVersion::latest())
                .credentials_provider(make_s3_test_credentials())
                .region(s3::config::Region::new("us-east-1"))
                .http_client(replay_client.clone())
                .build(),
        );

        // Run the code we want to test with it
        let size = determine_prefix_file_size(client, "test-bucket", "test-prefix")
            .await
            .unwrap();

        assert_eq!(19, size);
        replay_client.assert_requests_match(&[]);
    }
}
The following code example shows how to upload or download large files to and from Amazon S3.
For more information, see Uploading an object using multipart upload.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

use std::fs::File;
use std::io::prelude::*;
use std::path::Path;

use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::error::DisplayErrorContext;
use aws_sdk_s3::operation::{
    create_multipart_upload::CreateMultipartUploadOutput, get_object::GetObjectOutput,
};
use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};
use aws_sdk_s3::{config::Region, Client as S3Client};
use aws_smithy_types::byte_stream::{ByteStream, Length};
use rand::distributions::Alphanumeric;
use rand::{thread_rng, Rng};
use s3_code_examples::error::S3ExampleError;
use std::process;
use uuid::Uuid;

// In bytes, minimum chunk size of 5MB. Increase CHUNK_SIZE to send larger chunks.
const CHUNK_SIZE: u64 = 1024 * 1024 * 5;
const MAX_CHUNKS: u64 = 10000;

#[tokio::main]
pub async fn main() {
    if let Err(err) = run_example().await {
        eprintln!("Error: {}", DisplayErrorContext(err));
        process::exit(1);
    }
}

async fn run_example() -> Result<(), S3ExampleError> {
    let shared_config = aws_config::load_from_env().await;
    let client = S3Client::new(&shared_config);

    let bucket_name = format!("amzn-s3-demo-bucket-{}", Uuid::new_v4());
    let region_provider = RegionProviderChain::first_try(Region::new("us-west-2"));
    let region = region_provider.region().await.unwrap();
    s3_code_examples::create_bucket(&client, &bucket_name, &region).await?;

    let key = "sample.txt".to_string();

    // Create a multipart upload. Use UploadPart and CompleteMultipartUpload to
    // upload the file.
    let multipart_upload_res: CreateMultipartUploadOutput = client
        .create_multipart_upload()
        .bucket(&bucket_name)
        .key(&key)
        .send()
        .await?;
    let upload_id = multipart_upload_res.upload_id().ok_or(S3ExampleError::new(
        "Missing upload_id after CreateMultipartUpload",
    ))?;

    // Create a file of random characters for the upload.
    let mut file = File::create(&key).expect("Could not create sample file.");
    // Loop until the file is 5 chunks.
    while file.metadata().unwrap().len() <= CHUNK_SIZE * 4 {
        let rand_string: String = thread_rng()
            .sample_iter(&Alphanumeric)
            .take(256)
            .map(char::from)
            .collect();
        let return_string: String = "\n".to_string();
        file.write_all(rand_string.as_ref())
            .expect("Error writing to file.");
        file.write_all(return_string.as_ref())
            .expect("Error writing to file.");
    }

    let path = Path::new(&key);
    let file_size = tokio::fs::metadata(path)
        .await
        .expect("it exists I swear")
        .len();

    let mut chunk_count = (file_size / CHUNK_SIZE) + 1;
    let mut size_of_last_chunk = file_size % CHUNK_SIZE;
    if size_of_last_chunk == 0 {
        size_of_last_chunk = CHUNK_SIZE;
        chunk_count -= 1;
    }

    if file_size == 0 {
        return Err(S3ExampleError::new("Bad file size."));
    }
    if chunk_count > MAX_CHUNKS {
        return Err(S3ExampleError::new(
            "Too many chunks! Try increasing your chunk size.",
        ));
    }

    let mut upload_parts: Vec<aws_sdk_s3::types::CompletedPart> = Vec::new();

    for chunk_index in 0..chunk_count {
        let this_chunk = if chunk_count - 1 == chunk_index {
            size_of_last_chunk
        } else {
            CHUNK_SIZE
        };
        let stream = ByteStream::read_from()
            .path(path)
            .offset(chunk_index * CHUNK_SIZE)
            .length(Length::Exact(this_chunk))
            .build()
            .await
            .unwrap();

        // Chunk index needs to start at 0, but part numbers start at 1.
        let part_number = (chunk_index as i32) + 1;
        let upload_part_res = client
            .upload_part()
            .key(&key)
            .bucket(&bucket_name)
            .upload_id(upload_id)
            .body(stream)
            .part_number(part_number)
            .send()
            .await?;

        upload_parts.push(
            CompletedPart::builder()
                .e_tag(upload_part_res.e_tag.unwrap_or_default())
                .part_number(part_number)
                .build(),
        );
    }

    // upload_parts: Vec<aws_sdk_s3::types::CompletedPart>
    let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
        .set_parts(Some(upload_parts))
        .build();

    let _complete_multipart_upload_res = client
        .complete_multipart_upload()
        .bucket(&bucket_name)
        .key(&key)
        .multipart_upload(completed_multipart_upload)
        .upload_id(upload_id)
        .send()
        .await?;

    let data: GetObjectOutput =
        s3_code_examples::download_object(&client, &bucket_name, &key).await?;
    let data_length: u64 = data
        .content_length()
        .unwrap_or_default()
        .try_into()
        .unwrap();
    if file.metadata().unwrap().len() == data_length {
        println!("Data lengths match.");
    } else {
        println!("The data was not the same size!");
    }

    s3_code_examples::clear_bucket(&client, &bucket_name)
        .await
        .expect("Error emptying bucket.");
    s3_code_examples::delete_bucket(&client, &bucket_name)
        .await
        .expect("Error deleting bucket.");

    Ok(())
}
Serverless examples
The following code example shows how to implement a Lambda function that receives an event triggered by uploading an object to an S3 bucket. The function retrieves the S3 bucket name and object key from the event parameter and calls the Amazon S3 API to retrieve and log the content type of the object.
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Rust.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    }))
    .await
}

async fn function_handler(s3_client: &Client, evt: LambdaEvent<S3Event>) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received S3 event");

    if evt.payload.records.is_empty() {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0]
        .s3
        .bucket
        .name
        .as_ref()
        .expect("Bucket name to exist");
    let key = evt.payload.records[0]
        .s3
        .object
        .key
        .as_ref()
        .expect("Object key to exist");
    tracing::info!("Request is for bucket {} and object {}", bucket, key);

    let s3_get_object_result = s3_client.get_object().bucket(bucket).key(key).send().await;

    match s3_get_object_result {
        Ok(_) => tracing::info!(
            "S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"
        ),
        Err(_) => tracing::info!("Failure with S3 Get Object request"),
    }

    Ok(())
}