Amazon S3 examples using SDK for Rust

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Rust with Amazon S3.

Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios and cross-service examples.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service.

Each example includes a link to GitHub, where you can find instructions on how to set up and run the code in context.

Actions

The following code example shows how to use CompleteMultipartUpload.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

let _complete_multipart_upload_res = client
    .complete_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .multipart_upload(completed_multipart_upload)
    .upload_id(upload_id)
    .send()
    .await
    .unwrap();
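
The `upload_id` comes from the CreateMultipartUpload response, and `completed_multipart_upload` is assembled from the `CompletedPart` values gathered while uploading, as the UploadPart example later on this page shows:

// Assembled from the CompletedPart values collected during the upload.
let completed_multipart_upload = CompletedMultipartUpload::builder()
    .set_parts(Some(upload_parts))
    .build();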

The following code example shows how to use CopyObject.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn copy_object(
    client: &Client,
    bucket_name: &str,
    object_key: &str,
    target_key: &str,
) -> Result<CopyObjectOutput, SdkError<CopyObjectError>> {
    let mut source_bucket_and_object: String = "".to_owned();
    source_bucket_and_object.push_str(bucket_name);
    source_bucket_and_object.push('/');
    source_bucket_and_object.push_str(object_key);

    client
        .copy_object()
        .copy_source(source_bucket_and_object)
        .bucket(bucket_name)
        .key(target_key)
        .send()
        .await
}
  • For API details, see CopyObject in AWS SDK for Rust API reference.
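
Note that `copy_source` expects the source in `bucket/key` form, which is why the function concatenates the two. A hypothetical call might look like the following; the bucket and key names are placeholders, not part of the original example.

// Copy "reports/2024.csv" to "archive/2024.csv" within the same bucket.
let _output = copy_object(&client, "amzn-s3-demo-bucket", "reports/2024.csv", "archive/2024.csv").await?;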

The following code example shows how to use CreateBucket.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn create_bucket(
    client: &Client,
    bucket_name: &str,
    region: &str,
) -> Result<CreateBucketOutput, SdkError<CreateBucketError>> {
    let constraint = BucketLocationConstraint::from(region);
    let cfg = CreateBucketConfiguration::builder()
        .location_constraint(constraint)
        .build();
    client
        .create_bucket()
        .create_bucket_configuration(cfg)
        .bucket(bucket_name)
        .send()
        .await
}
  • For API details, see CreateBucket in AWS SDK for Rust API reference.

The following code example shows how to use CreateMultipartUpload.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

let multipart_upload_res: CreateMultipartUploadOutput = client
    .create_multipart_upload()
    .bucket(&bucket_name)
    .key(&key)
    .send()
    .await
    .unwrap();
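
The response carries the upload ID that links the subsequent UploadPart and CompleteMultipartUpload calls to this upload. The complete multipart example on this page extracts it like this:

let upload_id = multipart_upload_res.upload_id().unwrap();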

The following code example shows how to use DeleteBucket.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn delete_bucket(client: &Client, bucket_name: &str) -> Result<(), Error> {
    client.delete_bucket().bucket(bucket_name).send().await?;
    println!("Bucket deleted");
    Ok(())
}
  • For API details, see DeleteBucket in AWS SDK for Rust API reference.

The following code example shows how to use DeleteObject.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn remove_object(client: &Client, bucket: &str, key: &str) -> Result<(), Error> {
    client
        .delete_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await?;

    println!("Object deleted.");

    Ok(())
}
  • For API details, see DeleteObject in AWS SDK for Rust API reference.

The following code example shows how to use DeleteObjects.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn delete_objects(client: &Client, bucket_name: &str) -> Result<Vec<String>, Error> {
    let objects = client.list_objects_v2().bucket(bucket_name).send().await?;

    let mut delete_objects: Vec<ObjectIdentifier> = vec![];
    for obj in objects.contents() {
        let obj_id = ObjectIdentifier::builder()
            .set_key(Some(obj.key().unwrap().to_string()))
            .build()
            .map_err(Error::from)?;
        delete_objects.push(obj_id);
    }

    let return_keys = delete_objects.iter().map(|o| o.key.clone()).collect();

    if !delete_objects.is_empty() {
        client
            .delete_objects()
            .bucket(bucket_name)
            .delete(
                Delete::builder()
                    .set_objects(Some(delete_objects))
                    .build()
                    .map_err(Error::from)?,
            )
            .send()
            .await?;
    }

    let objects: ListObjectsV2Output = client.list_objects_v2().bucket(bucket_name).send().await?;

    eprintln!("{objects:?}");

    match objects.key_count {
        Some(0) => Ok(return_keys),
        _ => Err(Error::unhandled(
            "There were still objects left in the bucket.",
        )),
    }
}
  • For API details, see DeleteObjects in AWS SDK for Rust API reference.

The following code example shows how to use GetBucketLocation.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn show_buckets(strict: bool, client: &Client, region: &str) -> Result<(), Error> {
    let resp = client.list_buckets().send().await?;
    let buckets = resp.buckets();
    let num_buckets = buckets.len();

    let mut in_region = 0;

    for bucket in buckets {
        if strict {
            let r = client
                .get_bucket_location()
                .bucket(bucket.name().unwrap_or_default())
                .send()
                .await?;

            if r.location_constraint().unwrap().as_ref() == region {
                println!("{}", bucket.name().unwrap_or_default());
                in_region += 1;
            }
        } else {
            println!("{}", bucket.name().unwrap_or_default());
        }
    }

    println!();
    if strict {
        println!(
            "Found {} buckets in the {} region out of a total of {} buckets.",
            in_region, region, num_buckets
        );
    } else {
        println!("Found {} buckets in all regions.", num_buckets);
    }

    Ok(())
}
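
When `strict` is true, only buckets whose location constraint matches the given region are listed; otherwise every bucket is listed. A hypothetical invocation (the region name is illustrative):

// List only the buckets located in us-east-2.
show_buckets(true, &client, "us-east-2").await?;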

The following code example shows how to use GetObject.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn get_object(client: Client, opt: Opt) -> Result<usize, anyhow::Error> {
    trace!("bucket: {}", opt.bucket);
    trace!("object: {}", opt.object);
    trace!("destination: {}", opt.destination.display());

    let mut file = File::create(opt.destination.clone())?;

    let mut object = client
        .get_object()
        .bucket(opt.bucket)
        .key(opt.object)
        .send()
        .await?;

    let mut byte_count = 0_usize;
    while let Some(bytes) = object.body.try_next().await? {
        let bytes_len = bytes.len();
        file.write_all(&bytes)?;
        trace!("Intermediate write of {bytes_len}");
        byte_count += bytes_len;
    }

    Ok(byte_count)
}
  • For API details, see GetObject in AWS SDK for Rust API reference.

The following code example shows how to use ListBuckets.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn show_buckets(strict: bool, client: &Client, region: &str) -> Result<(), Error> {
    let resp = client.list_buckets().send().await?;
    let buckets = resp.buckets();
    let num_buckets = buckets.len();

    let mut in_region = 0;

    for bucket in buckets {
        if strict {
            let r = client
                .get_bucket_location()
                .bucket(bucket.name().unwrap_or_default())
                .send()
                .await?;

            if r.location_constraint().unwrap().as_ref() == region {
                println!("{}", bucket.name().unwrap_or_default());
                in_region += 1;
            }
        } else {
            println!("{}", bucket.name().unwrap_or_default());
        }
    }

    println!();
    if strict {
        println!(
            "Found {} buckets in the {} region out of a total of {} buckets.",
            in_region, region, num_buckets
        );
    } else {
        println!("Found {} buckets in all regions.", num_buckets);
    }

    Ok(())
}
  • For API details, see ListBuckets in AWS SDK for Rust API reference.

The following code example shows how to use ListObjectVersions.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

async fn show_versions(client: &Client, bucket: &str) -> Result<(), Error> {
    let resp = client.list_object_versions().bucket(bucket).send().await?;

    for version in resp.versions() {
        println!("{}", version.key().unwrap_or_default());
        println!(" version ID: {}", version.version_id().unwrap_or_default());
        println!();
    }

    Ok(())
}

The following code example shows how to use ListObjectsV2.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn list_objects(client: &Client, bucket: &str) -> Result<(), Error> {
    let mut response = client
        .list_objects_v2()
        .bucket(bucket.to_owned())
        .max_keys(10) // In this example, go 10 at a time.
        .into_paginator()
        .send();

    while let Some(result) = response.next().await {
        match result {
            Ok(output) => {
                for object in output.contents() {
                    println!(" - {}", object.key().unwrap_or("Unknown"));
                }
            }
            Err(err) => {
                eprintln!("{err:?}")
            }
        }
    }

    Ok(())
}
  • For API details, see ListObjectsV2 in AWS SDK for Rust API reference.

The following code example shows how to use PutObject.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

pub async fn upload_object(
    client: &Client,
    bucket_name: &str,
    file_name: &str,
    key: &str,
) -> Result<PutObjectOutput, SdkError<PutObjectError>> {
    let body = ByteStream::from_path(Path::new(file_name)).await;
    client
        .put_object()
        .bucket(bucket_name)
        .key(key)
        .body(body.unwrap())
        .send()
        .await
}
  • For API details, see PutObject in AWS SDK for Rust API reference.
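
For small, in-memory payloads, the body can also be built directly from bytes instead of a file path; a minimal sketch (this variant is an assumption, not part of the original example):

// Build the request body from an in-memory buffer rather than a file.
let body = ByteStream::from_static(b"Hello, world!");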

The following code example shows how to use UploadPart.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

let upload_part_res = client
    .upload_part()
    .key(&key)
    .bucket(&bucket_name)
    .upload_id(upload_id)
    .body(stream)
    .part_number(part_number)
    .send()
    .await?;

upload_parts.push(
    CompletedPart::builder()
        .e_tag(upload_part_res.e_tag.unwrap_or_default())
        .part_number(part_number)
        .build(),
);

let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
    .set_parts(Some(upload_parts))
    .build();
  • For API details, see UploadPart in AWS SDK for Rust API reference.
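
The `stream` argument is one slice of the source file; the complete multipart upload example later on this page builds it like this:

let stream = ByteStream::read_from()
    .path(path)
    .offset(chunk_index * CHUNK_SIZE)
    .length(Length::Exact(this_chunk))
    .build()
    .await
    .unwrap();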

Scenarios

The following code example shows how to create a presigned URL for Amazon S3 and upload an object.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create presigning requests to GET and PUT S3 objects.

async fn get_object(
    client: &Client,
    bucket: &str,
    object: &str,
    expires_in: u64,
) -> Result<(), Box<dyn Error>> {
    let expires_in = Duration::from_secs(expires_in);
    let presigned_request = client
        .get_object()
        .bucket(bucket)
        .key(object)
        .presigned(PresigningConfig::expires_in(expires_in)?)
        .await?;

    println!("Object URI: {}", presigned_request.uri());

    Ok(())
}

async fn put_object(
    client: &Client,
    bucket: &str,
    object: &str,
    expires_in: u64,
) -> Result<(), Box<dyn Error>> {
    let expires_in = Duration::from_secs(expires_in);
    let presigned_request = client
        .put_object()
        .bucket(bucket)
        .key(object)
        .presigned(PresigningConfig::expires_in(expires_in)?)
        .await?;

    println!("Object URI: {}", presigned_request.uri());

    Ok(())
}
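
The printed URI can then be used by any HTTP client without AWS credentials. A minimal sketch of uploading through the presigned PUT URL, assuming the `reqwest` crate (which the original example does not use):

// Hypothetical helper: PUT a payload to a presigned URL.
async fn upload_via_presigned_url(
    presigned_uri: &str,
    contents: Vec<u8>,
) -> Result<(), Box<dyn std::error::Error>> {
    let response = reqwest::Client::new()
        .put(presigned_uri)
        .body(contents)
        .send()
        .await?;
    // S3 responds with 200 OK when the upload succeeds.
    println!("Upload status: {}", response.status());
    Ok(())
}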

The following code example shows how to read data from an object in an S3 bucket, but only if that object has not been modified since the last retrieval time.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

use aws_sdk_s3::{
    error::SdkError,
    operation::head_object::HeadObjectError,
    primitives::{ByteStream, DateTime, DateTimeFormat},
    Client, Error,
};
use tracing::{error, warn};

const KEY: &str = "key";
const BODY: &str = "Hello, world!";

/// Demonstrate how `if-modified-since` reports that matching objects haven't
/// changed.
///
/// # Steps
/// - Create a bucket.
/// - Put an object in the bucket.
/// - Get the object's headers.
/// - Get the object's headers again, but only if modified.
/// - Delete the bucket.
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt::init();

    // Get a new UUID to use when creating a unique bucket name.
    let uuid = uuid::Uuid::new_v4();

    // Load the AWS configuration from the environment.
    let client = Client::new(&aws_config::load_from_env().await);

    // Generate a unique bucket name using the previously generated UUID.
    // Then create a new bucket with that name.
    let bucket_name = format!("if-modified-since-{uuid}");
    client
        .create_bucket()
        .bucket(bucket_name.clone())
        .send()
        .await?;

    // Create a new object in the bucket whose name is `KEY` and whose
    // contents are `BODY`.
    let put_object_output = client
        .put_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .body(ByteStream::from_static(BODY.as_bytes()))
        .send()
        .await;

    // If the `PutObject` succeeded, get the eTag string from it. Otherwise,
    // report an error and return an empty string.
    let e_tag_1 = match put_object_output {
        Ok(put_object) => put_object.e_tag.unwrap(),
        Err(err) => {
            error!("{err:?}");
            String::new()
        }
    };

    // Request the object's headers.
    let head_object_output = client
        .head_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .send()
        .await;

    // If the `HeadObject` request succeeded, create a tuple containing the
    // values of the headers `last-modified` and `etag`. If the request
    // failed, return the error in a tuple instead.
    let (last_modified, e_tag_2) = match head_object_output {
        Ok(head_object) => (
            Ok(head_object.last_modified().cloned().unwrap()),
            head_object.e_tag.unwrap(),
        ),
        Err(err) => (Err(err), String::new()),
    };

    warn!("last modified: {last_modified:?}");
    assert_eq!(
        e_tag_1, e_tag_2,
        "PutObject and first GetObject had differing eTags"
    );

    println!("First value of last_modified: {last_modified:?}");
    println!("First tag: {}\n", e_tag_1);

    // Send a second `HeadObject` request. This time, the `if_modified_since`
    // option is specified, giving the `last_modified` value returned by the
    // first call to `HeadObject`.
    //
    // Since the object hasn't been changed, and there are no other objects in
    // the bucket, there should be no matching objects.
    let head_object_output = client
        .head_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .if_modified_since(last_modified.unwrap())
        .send()
        .await;

    // If the `HeadObject` request succeeded, the result is a tuple containing
    // the `last_modified` and `e_tag_1` properties. This is _not_ the expected
    // result.
    //
    // The _expected_ result of the second call to `HeadObject` is an
    // `SdkError::ServiceError` containing the HTTP error response. If that's
    // the case and the HTTP status is 304 (not modified), the output is a
    // tuple containing the values of the HTTP `last-modified` and `etag`
    // headers.
    //
    // If any other HTTP error occurred, the error is returned as an
    // `SdkError::ServiceError`.
    let (last_modified, e_tag_2): (Result<DateTime, SdkError<HeadObjectError>>, String) =
        match head_object_output {
            Ok(head_object) => (
                Ok(head_object.last_modified().cloned().unwrap()),
                head_object.e_tag.unwrap(),
            ),
            Err(err) => match err {
                SdkError::ServiceError(err) => {
                    // Get the raw HTTP response. If its status is 304, the
                    // object has not changed. This is the expected code path.
                    let http = err.raw();
                    match http.status().as_u16() {
                        // If the HTTP status is 304: Not Modified, return a
                        // tuple containing the values of the HTTP
                        // `last-modified` and `etag` headers.
                        304 => (
                            Ok(DateTime::from_str(
                                http.headers().get("last-modified").unwrap(),
                                DateTimeFormat::HttpDate,
                            )
                            .unwrap()),
                            http.headers().get("etag").map(|t| t.into()).unwrap(),
                        ),
                        // Any other HTTP status code is returned as an
                        // `SdkError::ServiceError`.
                        _ => (Err(SdkError::ServiceError(err)), String::new()),
                    }
                }
                // Any other kind of error is returned in a tuple containing
                // the error and an empty string.
                _ => (Err(err), String::new()),
            },
        };

    warn!("last modified: {last_modified:?}");
    assert_eq!(
        e_tag_1, e_tag_2,
        "PutObject and second HeadObject had different eTags"
    );

    println!("Second value of last modified: {last_modified:?}");
    println!("Second tag: {}", e_tag_2);

    // Clean up by deleting the object and the bucket.
    client
        .delete_object()
        .bucket(bucket_name.as_str())
        .key(KEY)
        .send()
        .await?;
    client
        .delete_bucket()
        .bucket(bucket_name.as_str())
        .send()
        .await?;

    Ok(())
}
  • For API details, see GetObject in AWS SDK for Rust API reference.

The following code example shows how to:

  • Create a bucket and upload a file to it.

  • Download an object from a bucket.

  • Copy an object to a subfolder in a bucket.

  • List the objects in a bucket.

  • Delete the bucket objects and the bucket.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Code for the binary crate which runs the scenario.

use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::{config::Region, Client};
use s3_service::error::Error;
use uuid::Uuid;

#[tokio::main]
async fn main() -> Result<(), Error> {
    let (region, client, bucket_name, file_name, key, target_key) = initialize_variables().await;

    if let Err(e) = run_s3_operations(region, client, bucket_name, file_name, key, target_key).await
    {
        println!("{:?}", e);
    };

    Ok(())
}

async fn initialize_variables() -> (Region, Client, String, String, String, String) {
    let region_provider = RegionProviderChain::first_try(Region::new("us-west-2"));
    let region = region_provider.region().await.unwrap();

    let shared_config = aws_config::from_env().region(region_provider).load().await;
    let client = Client::new(&shared_config);

    let bucket_name = format!("doc-example-bucket-{}", Uuid::new_v4());
    let file_name = "s3/testfile.txt".to_string();
    let key = "test file key name".to_string();
    let target_key = "target_key".to_string();

    (region, client, bucket_name, file_name, key, target_key)
}

async fn run_s3_operations(
    region: Region,
    client: Client,
    bucket_name: String,
    file_name: String,
    key: String,
    target_key: String,
) -> Result<(), Error> {
    s3_service::create_bucket(&client, &bucket_name, region.as_ref()).await?;
    s3_service::upload_object(&client, &bucket_name, &file_name, &key).await?;
    let _object = s3_service::download_object(&client, &bucket_name, &key).await;
    s3_service::copy_object(&client, &bucket_name, &key, &target_key).await?;
    s3_service::list_objects(&client, &bucket_name).await?;
    s3_service::delete_objects(&client, &bucket_name).await?;
    s3_service::delete_bucket(&client, &bucket_name).await?;

    Ok(())
}

A library crate with common actions called by the binary.

use aws_sdk_s3::operation::{
    copy_object::{CopyObjectError, CopyObjectOutput},
    create_bucket::{CreateBucketError, CreateBucketOutput},
    get_object::{GetObjectError, GetObjectOutput},
    list_objects_v2::ListObjectsV2Output,
    put_object::{PutObjectError, PutObjectOutput},
};
use aws_sdk_s3::types::{
    BucketLocationConstraint, CreateBucketConfiguration, Delete, ObjectIdentifier,
};
use aws_sdk_s3::{error::SdkError, primitives::ByteStream, Client};
use error::Error;
use std::path::Path;
use std::str;

pub mod error;

pub async fn delete_bucket(client: &Client, bucket_name: &str) -> Result<(), Error> {
    client.delete_bucket().bucket(bucket_name).send().await?;
    println!("Bucket deleted");
    Ok(())
}

pub async fn delete_objects(client: &Client, bucket_name: &str) -> Result<Vec<String>, Error> {
    let objects = client.list_objects_v2().bucket(bucket_name).send().await?;

    let mut delete_objects: Vec<ObjectIdentifier> = vec![];
    for obj in objects.contents() {
        let obj_id = ObjectIdentifier::builder()
            .set_key(Some(obj.key().unwrap().to_string()))
            .build()
            .map_err(Error::from)?;
        delete_objects.push(obj_id);
    }

    let return_keys = delete_objects.iter().map(|o| o.key.clone()).collect();

    if !delete_objects.is_empty() {
        client
            .delete_objects()
            .bucket(bucket_name)
            .delete(
                Delete::builder()
                    .set_objects(Some(delete_objects))
                    .build()
                    .map_err(Error::from)?,
            )
            .send()
            .await?;
    }

    let objects: ListObjectsV2Output = client.list_objects_v2().bucket(bucket_name).send().await?;

    eprintln!("{objects:?}");

    match objects.key_count {
        Some(0) => Ok(return_keys),
        _ => Err(Error::unhandled(
            "There were still objects left in the bucket.",
        )),
    }
}

pub async fn list_objects(client: &Client, bucket: &str) -> Result<(), Error> {
    let mut response = client
        .list_objects_v2()
        .bucket(bucket.to_owned())
        .max_keys(10) // In this example, go 10 at a time.
        .into_paginator()
        .send();

    while let Some(result) = response.next().await {
        match result {
            Ok(output) => {
                for object in output.contents() {
                    println!(" - {}", object.key().unwrap_or("Unknown"));
                }
            }
            Err(err) => {
                eprintln!("{err:?}")
            }
        }
    }

    Ok(())
}

pub async fn copy_object(
    client: &Client,
    bucket_name: &str,
    object_key: &str,
    target_key: &str,
) -> Result<CopyObjectOutput, SdkError<CopyObjectError>> {
    let mut source_bucket_and_object: String = "".to_owned();
    source_bucket_and_object.push_str(bucket_name);
    source_bucket_and_object.push('/');
    source_bucket_and_object.push_str(object_key);

    client
        .copy_object()
        .copy_source(source_bucket_and_object)
        .bucket(bucket_name)
        .key(target_key)
        .send()
        .await
}

pub async fn download_object(
    client: &Client,
    bucket_name: &str,
    key: &str,
) -> Result<GetObjectOutput, SdkError<GetObjectError>> {
    client
        .get_object()
        .bucket(bucket_name)
        .key(key)
        .send()
        .await
}

pub async fn upload_object(
    client: &Client,
    bucket_name: &str,
    file_name: &str,
    key: &str,
) -> Result<PutObjectOutput, SdkError<PutObjectError>> {
    let body = ByteStream::from_path(Path::new(file_name)).await;
    client
        .put_object()
        .bucket(bucket_name)
        .key(key)
        .body(body.unwrap())
        .send()
        .await
}

pub async fn create_bucket(
    client: &Client,
    bucket_name: &str,
    region: &str,
) -> Result<CreateBucketOutput, SdkError<CreateBucketError>> {
    let constraint = BucketLocationConstraint::from(region);
    let cfg = CreateBucketConfiguration::builder()
        .location_constraint(constraint)
        .build();
    client
        .create_bucket()
        .create_bucket_configuration(cfg)
        .bucket(bucket_name)
        .send()
        .await
}

The following code example shows best-practice techniques for writing unit and integration tests using an AWS SDK.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Cargo.toml for testing examples.

[package]
name = "testing-examples"
version = "0.1.0"
authors = [
  "John Disanti <jdisanti@amazon.com>",
  "Doug Schwartz <dougsch@amazon.com>",
]
edition = "2021"

# snippet-start:[testing.rust.Cargo.toml]
[dependencies]
async-trait = "0.1.51"
aws-config = { version = "1.0.1", features = ["behavior-version-latest"] }
aws-credential-types = { version = "1.0.1", features = [
  "hardcoded-credentials",
] }
aws-sdk-s3 = { version = "1.4.0" }
aws-smithy-types = { version = "1.0.1" }
aws-smithy-runtime = { version = "1.0.1", features = ["test-util"] }
aws-smithy-runtime-api = { version = "1.0.1", features = ["test-util"] }
aws-types = { version = "1.0.1" }
clap = { version = "~4.4", features = ["derive"] }
http = "0.2.9"
mockall = "0.11.4"
serde_json = "1"
tokio = { version = "1.20.1", features = ["full"] }
tracing-subscriber = { version = "0.3.15", features = ["env-filter"] }
# snippet-end:[testing.rust.Cargo.toml]

[[bin]]
name = "main"
path = "src/main.rs"

Unit testing example using automock and a service wrapper.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

// snippet-start:[testing.rust.wrapper]
// snippet-start:[testing.rust.wrapper-uses]
use aws_sdk_s3 as s3;
#[allow(unused_imports)]
use mockall::automock;
use s3::operation::list_objects_v2::{ListObjectsV2Error, ListObjectsV2Output};
// snippet-end:[testing.rust.wrapper-uses]

// snippet-start:[testing.rust.wrapper-which-impl]
#[cfg(test)]
pub use MockS3Impl as S3;
#[cfg(not(test))]
pub use S3Impl as S3;
// snippet-end:[testing.rust.wrapper-which-impl]

// snippet-start:[testing.rust.wrapper-impl]
#[allow(dead_code)]
pub struct S3Impl {
    inner: s3::Client,
}

#[cfg_attr(test, automock)]
impl S3Impl {
    #[allow(dead_code)]
    pub fn new(inner: s3::Client) -> Self {
        Self { inner }
    }

    #[allow(dead_code)]
    pub async fn list_objects(
        &self,
        bucket: &str,
        prefix: &str,
        continuation_token: Option<String>,
    ) -> Result<ListObjectsV2Output, s3::error::SdkError<ListObjectsV2Error>> {
        self.inner
            .list_objects_v2()
            .bucket(bucket)
            .prefix(prefix)
            .set_continuation_token(continuation_token)
            .send()
            .await
    }
}
// snippet-end:[testing.rust.wrapper-impl]

// snippet-start:[testing.rust.wrapper-func]
#[allow(dead_code)]
pub async fn determine_prefix_file_size(
    // Now we take a reference to our trait object instead of the S3 client
    // s3_list: ListObjectsService,
    s3_list: S3,
    bucket: &str,
    prefix: &str,
) -> Result<usize, s3::Error> {
    let mut next_token: Option<String> = None;
    let mut total_size_bytes = 0;
    loop {
        let result = s3_list
            .list_objects(bucket, prefix, next_token.take())
            .await?;

        // Add up the file sizes we got back
        for object in result.contents() {
            total_size_bytes += object.size().unwrap_or(0) as usize;
        }

        // Handle pagination, and break the loop if there are no more pages
        next_token = result.next_continuation_token.clone();
        if next_token.is_none() {
            break;
        }
    }
    Ok(total_size_bytes)
}
// snippet-end:[testing.rust.wrapper-func]
// snippet-end:[testing.rust.wrapper]

// snippet-start:[testing.rust.wrapper-test-mod]
#[cfg(test)]
mod test {
    // snippet-start:[testing.rust.wrapper-tests]
    use super::*;
    use mockall::predicate::eq;

    // snippet-start:[testing.rust.wrapper-test-single]
    #[tokio::test]
    async fn test_single_page() {
        let mut mock = MockS3Impl::default();
        mock.expect_list_objects()
            .with(eq("test-bucket"), eq("test-prefix"), eq(None))
            .return_once(|_, _, _| {
                Ok(ListObjectsV2Output::builder()
                    .set_contents(Some(vec![
                        // Mock content for ListObjectsV2 response
                        s3::types::Object::builder().size(5).build(),
                        s3::types::Object::builder().size(2).build(),
                    ]))
                    .build())
            });

        // Run the code we want to test with it
        let size = determine_prefix_file_size(mock, "test-bucket", "test-prefix")
            .await
            .unwrap();

        // Verify we got the correct total size back
        assert_eq!(7, size);
    }
    // snippet-end:[testing.rust.wrapper-test-single]

    // snippet-start:[testing.rust.wrapper-test-multiple]
    #[tokio::test]
    async fn test_multiple_pages() {
        // Create the Mock instance with two pages of objects now
        let mut mock = MockS3Impl::default();
        mock.expect_list_objects()
            .with(eq("test-bucket"), eq("test-prefix"), eq(None))
            .return_once(|_, _, _| {
                Ok(ListObjectsV2Output::builder()
                    .set_contents(Some(vec![
                        // Mock content for ListObjectsV2 response
                        s3::types::Object::builder().size(5).build(),
                        s3::types::Object::builder().size(2).build(),
                    ]))
                    .set_next_continuation_token(Some("next".to_string()))
                    .build())
            });
        mock.expect_list_objects()
            .with(
                eq("test-bucket"),
                eq("test-prefix"),
                eq(Some("next".to_string())),
            )
            .return_once(|_, _, _| {
                Ok(ListObjectsV2Output::builder()
                    .set_contents(Some(vec![
                        // Mock content for ListObjectsV2 response
                        s3::types::Object::builder().size(3).build(),
                        s3::types::Object::builder().size(9).build(),
                    ]))
                    .build())
            });

        // Run the code we want to test with it
        let size = determine_prefix_file_size(mock, "test-bucket", "test-prefix")
            .await
            .unwrap();

        assert_eq!(19, size);
    }
    // snippet-end:[testing.rust.wrapper-test-multiple]
    // snippet-end:[testing.rust.wrapper-tests]
}
// snippet-end:[testing.rust.wrapper-test-mod]

Integration testing example using StaticReplayClient.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

// snippet-start:[testing.rust.replay-uses]
use aws_sdk_s3 as s3;
// snippet-end:[testing.rust.replay-uses]

#[allow(dead_code)]
// snippet-start:[testing.rust.replay]
pub async fn determine_prefix_file_size(
    // Now we take a reference to our trait object instead of the S3 client
    // s3_list: ListObjectsService,
    s3: s3::Client,
    bucket: &str,
    prefix: &str,
) -> Result<usize, s3::Error> {
    let mut next_token: Option<String> = None;
    let mut total_size_bytes = 0;
    loop {
        let result = s3
            .list_objects_v2()
            .prefix(prefix)
            .bucket(bucket)
            .set_continuation_token(next_token.take())
            .send()
            .await?;

        // Add up the file sizes we got back
        for object in result.contents() {
            total_size_bytes += object.size().unwrap_or(0) as usize;
        }

        // Handle pagination, and break the loop if there are no more pages
        next_token = result.next_continuation_token.clone();
        if next_token.is_none() {
            break;
        }
    }
    Ok(total_size_bytes)
}
// snippet-end:[testing.rust.replay]

#[allow(dead_code)]
// snippet-start:[testing.rust.replay-tests]
// snippet-start:[testing.rust.replay-make-credentials]
fn make_s3_test_credentials() -> s3::config::Credentials {
    s3::config::Credentials::new(
        "ATESTCLIENT",
        "astestsecretkey",
        Some("atestsessiontoken".to_string()),
        None,
        "",
    )
}
// snippet-end:[testing.rust.replay-make-credentials]

// snippet-start:[testing.rust.replay-test-module]
#[cfg(test)]
mod test {
    // snippet-start:[testing.rust.replay-test-single]
    use super::*;
    use aws_config::BehaviorVersion;
    use aws_sdk_s3 as s3;
    use aws_smithy_runtime::client::http::test_util::{ReplayEvent, StaticReplayClient};
    use aws_smithy_types::body::SdkBody;

    #[tokio::test]
    async fn test_single_page() {
        let page_1 = ReplayEvent::new(
            http::Request::builder()
                .method("GET")
                .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix")
                .body(SdkBody::empty())
                .unwrap(),
            http::Response::builder()
                .status(200)
                .body(SdkBody::from(include_str!("./testing/response_1.xml")))
                .unwrap(),
        );
        let replay_client = StaticReplayClient::new(vec![page_1]);

        let client: s3::Client = s3::Client::from_conf(
            s3::Config::builder()
                .behavior_version(BehaviorVersion::latest())
                .credentials_provider(make_s3_test_credentials())
                .region(s3::config::Region::new("us-east-1"))
                .http_client(replay_client.clone())
                .build(),
        );

        // Run the code we want to test with it
        let size = determine_prefix_file_size(client, "test-bucket", "test-prefix")
            .await
            .unwrap();

        // Verify we got the correct total size back
        assert_eq!(7, size);
        replay_client.assert_requests_match(&[]);
    }
    // snippet-end:[testing.rust.replay-test-single]

    // snippet-start:[testing.rust.replay-test-multiple]
    #[tokio::test]
    async fn test_multiple_pages() {
        // snippet-start:[testing.rust.replay-create-replay]
        let page_1 = ReplayEvent::new(
            http::Request::builder()
                .method("GET")
                .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix")
                .body(SdkBody::empty())
                .unwrap(),
            http::Response::builder()
                .status(200)
                .body(SdkBody::from(include_str!("./testing/response_multi_1.xml")))
                .unwrap(),
        );
        let page_2 = ReplayEvent::new(
            http::Request::builder()
                .method("GET")
                .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix&continuation-token=next")
                .body(SdkBody::empty())
                .unwrap(),
            http::Response::builder()
                .status(200)
                .body(SdkBody::from(include_str!("./testing/response_multi_2.xml")))
                .unwrap(),
        );
        let replay_client = StaticReplayClient::new(vec![page_1, page_2]);
        // snippet-end:[testing.rust.replay-create-replay]

        // snippet-start:[testing.rust.replay-create-client]
        let client: s3::Client = s3::Client::from_conf(
            s3::Config::builder()
                .behavior_version(BehaviorVersion::latest())
                .credentials_provider(make_s3_test_credentials())
                .region(s3::config::Region::new("us-east-1"))
                .http_client(replay_client.clone())
                .build(),
        );
        // snippet-end:[testing.rust.replay-create-client]

        // Run the code we want to test with it
        // snippet-start:[testing.rust.replay-test-and-verify]
        let size = determine_prefix_file_size(client, "test-bucket", "test-prefix")
            .await
            .unwrap();

        assert_eq!(19, size);
        replay_client.assert_requests_match(&[]);
        // snippet-end:[testing.rust.replay-test-and-verify]
    }
    // snippet-end:[testing.rust.replay-test-multiple]
}
// snippet-end:[testing.rust.replay-tests]
// snippet-end:[testing.rust.replay-test-module]

The following code example shows how to upload or download large files to and from Amazon S3.

For more information, see Uploading an object using multipart upload.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

use std::fs::File;
use std::io::prelude::*;
use std::path::Path;

use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::error::DisplayErrorContext;
use aws_sdk_s3::operation::{
    create_multipart_upload::CreateMultipartUploadOutput, get_object::GetObjectOutput,
};
use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};
use aws_sdk_s3::{config::Region, Client as S3Client};
use aws_smithy_types::byte_stream::{ByteStream, Length};
use rand::distributions::Alphanumeric;
use rand::{thread_rng, Rng};
use s3_service::error::Error;
use std::process;
use uuid::Uuid;

// In bytes, minimum chunk size of 5MB. Increase CHUNK_SIZE to send larger chunks.
const CHUNK_SIZE: u64 = 1024 * 1024 * 5;
const MAX_CHUNKS: u64 = 10000;

#[tokio::main]
pub async fn main() {
    if let Err(err) = run_example().await {
        eprintln!("Error: {}", DisplayErrorContext(err));
        process::exit(1);
    }
}

async fn run_example() -> Result<(), Error> {
    let shared_config = aws_config::load_from_env().await;
    let client = S3Client::new(&shared_config);

    let bucket_name = format!("doc-example-bucket-{}", Uuid::new_v4());
    let region_provider = RegionProviderChain::first_try(Region::new("us-west-2"));
    let region = region_provider.region().await.unwrap();
    s3_service::create_bucket(&client, &bucket_name, region.as_ref()).await?;

    let key = "sample.txt".to_string();
    let multipart_upload_res: CreateMultipartUploadOutput = client
        .create_multipart_upload()
        .bucket(&bucket_name)
        .key(&key)
        .send()
        .await
        .unwrap();
    let upload_id = multipart_upload_res.upload_id().unwrap();

    // Create a file of random characters for the upload.
    let mut file = File::create(&key).expect("Could not create sample file.");
    // Loop until the file is 5 chunks.
    while file.metadata().unwrap().len() <= CHUNK_SIZE * 4 {
        let rand_string: String = thread_rng()
            .sample_iter(&Alphanumeric)
            .take(256)
            .map(char::from)
            .collect();
        let return_string: String = "\n".to_string();
        file.write_all(rand_string.as_ref())
            .expect("Error writing to file.");
        file.write_all(return_string.as_ref())
            .expect("Error writing to file.");
    }

    let path = Path::new(&key);
    let file_size = tokio::fs::metadata(path)
        .await
        .expect("it exists I swear")
        .len();

    let mut chunk_count = (file_size / CHUNK_SIZE) + 1;
    let mut size_of_last_chunk = file_size % CHUNK_SIZE;
    if size_of_last_chunk == 0 {
        size_of_last_chunk = CHUNK_SIZE;
        chunk_count -= 1;
    }

    if file_size == 0 {
        panic!("Bad file size.");
    }
    if chunk_count > MAX_CHUNKS {
        panic!("Too many chunks! Try increasing your chunk size.")
    }

    let mut upload_parts: Vec<CompletedPart> = Vec::new();

    for chunk_index in 0..chunk_count {
        let this_chunk = if chunk_count - 1 == chunk_index {
            size_of_last_chunk
        } else {
            CHUNK_SIZE
        };
        let stream = ByteStream::read_from()
            .path(path)
            .offset(chunk_index * CHUNK_SIZE)
            .length(Length::Exact(this_chunk))
            .build()
            .await
            .unwrap();

        // Chunk index needs to start at 0, but part numbers start at 1.
        let part_number = (chunk_index as i32) + 1;
        let upload_part_res = client
            .upload_part()
            .key(&key)
            .bucket(&bucket_name)
            .upload_id(upload_id)
            .body(stream)
            .part_number(part_number)
            .send()
            .await?;
        upload_parts.push(
            CompletedPart::builder()
                .e_tag(upload_part_res.e_tag.unwrap_or_default())
                .part_number(part_number)
                .build(),
        );
    }

    let completed_multipart_upload: CompletedMultipartUpload = CompletedMultipartUpload::builder()
        .set_parts(Some(upload_parts))
        .build();

    let _complete_multipart_upload_res = client
        .complete_multipart_upload()
        .bucket(&bucket_name)
        .key(&key)
        .multipart_upload(completed_multipart_upload)
        .upload_id(upload_id)
        .send()
        .await
        .unwrap();

    let data: GetObjectOutput = s3_service::download_object(&client, &bucket_name, &key).await?;
    let data_length: u64 = data
        .content_length()
        .unwrap_or_default()
        .try_into()
        .unwrap();
    if file.metadata().unwrap().len() == data_length {
        println!("Data lengths match.");
    } else {
        println!("The data was not the same size!");
    }

    s3_service::delete_objects(&client, &bucket_name)
        .await
        .expect("Error emptying bucket.");
    s3_service::delete_bucket(&client, &bucket_name)
        .await
        .expect("Error deleting bucket.");

    Ok(())
}

Serverless examples

The following code example shows how to implement a Lambda function that receives an event triggered by uploading an object to an S3 bucket. The function retrieves the S3 bucket name and object key from the event parameter and calls the Amazon S3 API to retrieve and log the content type of the object.

SDK for Rust
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Rust.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    }))
    .await;

    res
}

async fn function_handler(s3_client: &Client, evt: LambdaEvent<S3Event>) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received S3 event");

    // Return early on an empty event; indexing records[0] below would
    // otherwise panic.
    if evt.payload.records.is_empty() {
        tracing::info!("Empty S3 event received");
        return Ok(());
    }

    let bucket = evt.payload.records[0]
        .s3
        .bucket
        .name
        .as_ref()
        .expect("Bucket name to exist");
    let key = evt.payload.records[0]
        .s3
        .object
        .key
        .as_ref()
        .expect("Object key to exist");
    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client.get_object().bucket(bucket).key(key).send().await;

    match s3_get_object_result {
        Ok(_) => tracing::info!(
            "S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"
        ),
        Err(_) => tracing::info!("Failure with S3 Get Object request"),
    }

    Ok(())
}