Amazon S3 examples using SDK for Java 2.x - AWS SDK for Java 2.x

Amazon S3 examples using SDK for Java 2.x

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Java 2.x with Amazon S3.

Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios and cross-service examples.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service.

Each example includes a link to GitHub, where you can find instructions on how to set up and run the code in context.

Get started

The following code examples show how to get started using Amazon S3.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.Bucket;
import software.amazon.awssdk.services.s3.model.ListBucketsResponse;
import software.amazon.awssdk.services.s3.model.S3Exception;

import java.util.List;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class HelloS3 {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        S3Client s3 = S3Client.builder()
                .region(region)
                .build();

        listBuckets(s3);
    }

    public static void listBuckets(S3Client s3) {
        try {
            ListBucketsResponse response = s3.listBuckets();
            List<Bucket> bucketList = response.buckets();
            bucketList.forEach(bucket -> {
                System.out.println("Bucket Name: " + bucket.name());
            });
        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }
}
  • For API details, see ListBuckets in AWS SDK for Java 2.x API Reference.
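If you prefer non-blocking calls, the same listing can be done with the SDK's asynchronous client. The following is a minimal sketch, not part of the linked example; it assumes S3AsyncClient from the same SDK, and the class name HelloS3Async is hypothetical.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;

public class HelloS3Async {
    public static void main(String[] args) {
        // Build the asynchronous client; every operation returns a CompletableFuture.
        S3AsyncClient s3Async = S3AsyncClient.builder()
                .region(Region.US_EAST_1)
                .build();

        // List buckets and print each name when the response arrives.
        s3Async.listBuckets()
                .thenAccept(response -> response.buckets()
                        .forEach(bucket -> System.out.println("Bucket Name: " + bucket.name())))
                .join(); // Block only so this demo process does not exit before the response.

        s3Async.close();
    }
}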

Actions

The following code example shows how to use CopyObject.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Copy an object using an S3Client.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.CopyObjectRequest; import software.amazon.awssdk.services.s3.model.CopyObjectResponse; import software.amazon.awssdk.services.s3.model.S3Exception; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class CopyObject { public static void main(String[] args) { final String usage = """ Usage: <objectKey> <fromBucket> <toBucket> Where: objectKey - The name of the object (for example, book.pdf). fromBucket - The S3 bucket name that contains the object (for example, bucket1). toBucket - The S3 bucket to copy the object to (for example, bucket2). """; if (args.length != 3) { System.out.println(usage); System.exit(1); } String objectKey = args[0]; String fromBucket = args[1]; String toBucket = args[2]; System.out.format("Copying object %s from bucket %s to %s\n", objectKey, fromBucket, toBucket); Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); copyBucketObject(s3, fromBucket, objectKey, toBucket); s3.close(); } public static String copyBucketObject(S3Client s3, String fromBucket, String objectKey, String toBucket) { CopyObjectRequest copyReq = CopyObjectRequest.builder() .sourceBucket(fromBucket) .sourceKey(objectKey) .destinationBucket(toBucket) .destinationKey(objectKey) .build(); try { CopyObjectResponse copyRes = s3.copyObject(copyReq); return copyRes.copyObjectResult().toString(); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } return ""; } }

Use an S3TransferManager to copy an object from one bucket to another. View the complete file and test.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedCopy;
import software.amazon.awssdk.transfer.s3.model.Copy;
import software.amazon.awssdk.transfer.s3.model.CopyRequest;

import java.util.UUID;

    public String copyObject(S3TransferManager transferManager, String bucketName,
                             String key, String destinationBucket, String destinationKey) {
        CopyObjectRequest copyObjectRequest = CopyObjectRequest.builder()
                .sourceBucket(bucketName)
                .sourceKey(key)
                .destinationBucket(destinationBucket)
                .destinationKey(destinationKey)
                .build();

        CopyRequest copyRequest = CopyRequest.builder()
                .copyObjectRequest(copyObjectRequest)
                .build();

        Copy copy = transferManager.copy(copyRequest);

        CompletedCopy completedCopy = copy.completionFuture().join();
        return completedCopy.response().copyObjectResult().eTag();
    }
  • For API details, see CopyObject in AWS SDK for Java 2.x API Reference.
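As a usage sketch for the Transfer Manager variant above, the snippet below builds a default S3TransferManager and calls the copyObject method. The bucket and key names are placeholders, and S3TransferManager.create() uses the SDK's default asynchronous client.

// Hypothetical caller for the copyObject method shown above.
S3TransferManager transferManager = S3TransferManager.create();

String eTag = copyObject(transferManager,
        "amzn-s3-demo-source-bucket",      // placeholder source bucket
        "reports/2023/summary.pdf",        // placeholder source key
        "amzn-s3-demo-destination-bucket", // placeholder destination bucket
        "backups/summary.pdf");            // placeholder destination key

System.out.println("Copied object ETag: " + eTag);
transferManager.close();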

The following code example shows how to use CreateBucket.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create a bucket.

import software.amazon.awssdk.core.waiters.WaiterResponse; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.CreateBucketRequest; import software.amazon.awssdk.services.s3.model.HeadBucketRequest; import software.amazon.awssdk.services.s3.model.HeadBucketResponse; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.waiters.S3Waiter; import java.net.URISyntaxException; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class CreateBucket { public static void main(String[] args) throws URISyntaxException { final String usage = """ Usage: <bucketName>\s Where: bucketName - The name of the bucket to create. The bucket name must be unique, or an error occurs. """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; System.out.format("Creating a bucket named %s\n", bucketName); Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); createBucket(s3, bucketName); s3.close(); } public static void createBucket(S3Client s3Client, String bucketName) { try { S3Waiter s3Waiter = s3Client.waiter(); CreateBucketRequest bucketRequest = CreateBucketRequest.builder() .bucket(bucketName) .build(); s3Client.createBucket(bucketRequest); HeadBucketRequest bucketRequestWait = HeadBucketRequest.builder() .bucket(bucketName) .build(); // Wait until the bucket is created and print out the response. WaiterResponse<HeadBucketResponse> waiterResponse = s3Waiter.waitUntilBucketExists(bucketRequestWait); waiterResponse.matched().response().ifPresent(System.out::println); System.out.println(bucketName + " is ready"); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }

Create a bucket with object lock enabled.

    // Create a new Amazon S3 bucket with object lock options.
    public void createBucketWithLockOptions(boolean enableObjectLock, String bucketName) {
        S3Waiter s3Waiter = getClient().waiter();
        CreateBucketRequest bucketRequest = CreateBucketRequest.builder()
                .bucket(bucketName)
                .objectLockEnabledForBucket(enableObjectLock)
                .build();

        getClient().createBucket(bucketRequest);
        HeadBucketRequest bucketRequestWait = HeadBucketRequest.builder()
                .bucket(bucketName)
                .build();

        // Wait until the bucket is created and print out the response.
        s3Waiter.waitUntilBucketExists(bucketRequestWait);
        System.out.println(bucketName + " is ready");
    }
  • For API details, see CreateBucket in AWS SDK for Java 2.x API Reference.
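The examples above create the bucket in the client's Region (us-east-1). When targeting another Region, the request generally needs a location constraint that matches the client's Region. A minimal sketch, assuming the US_WEST_2 constraint and a placeholder bucket name:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.BucketLocationConstraint;
import software.amazon.awssdk.services.s3.model.CreateBucketConfiguration;
import software.amazon.awssdk.services.s3.model.CreateBucketRequest;

public class CreateBucketInRegion {
    public static void main(String[] args) {
        S3Client s3 = S3Client.builder()
                .region(Region.US_WEST_2)
                .build();

        // Outside us-east-1, the location constraint must match the client's Region.
        CreateBucketRequest request = CreateBucketRequest.builder()
                .bucket("amzn-s3-demo-bucket") // placeholder bucket name
                .createBucketConfiguration(CreateBucketConfiguration.builder()
                        .locationConstraint(BucketLocationConstraint.US_WEST_2)
                        .build())
                .build();

        s3.createBucket(request);
        s3.close();
    }
}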

The following code example shows how to use DeleteBucket.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

    DeleteBucketRequest deleteBucketRequest = DeleteBucketRequest.builder()
            .bucket(bucket)
            .build();

    s3.deleteBucket(deleteBucketRequest);
    s3.close();
  • For API details, see DeleteBucket in AWS SDK for Java 2.x API Reference.
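DeleteBucket fails if the bucket still contains objects. The following sketch, assuming an existing S3Client named s3, a non-versioned bucket, and the ListObjectsV2Request, DeleteObjectRequest, and DeleteBucketRequest classes from the same model package, deletes every remaining object with the paginator before removing the bucket.

// Delete all objects, then the bucket itself (non-versioned buckets only).
ListObjectsV2Request listRequest = ListObjectsV2Request.builder()
        .bucket(bucket)
        .build();

s3.listObjectsV2Paginator(listRequest).contents().forEach(s3Object ->
        s3.deleteObject(DeleteObjectRequest.builder()
                .bucket(bucket)
                .key(s3Object.key())
                .build()));

s3.deleteBucket(DeleteBucketRequest.builder()
        .bucket(bucket)
        .build());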

The following code example shows how to use DeleteBucketPolicy.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.DeleteBucketPolicyRequest; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DeleteBucketPolicy { public static void main(String[] args) { final String usage = """ Usage: <bucketName> Where: bucketName - The Amazon S3 bucket to delete the policy from (for example, bucket1)."""; if (args.length != 1) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; System.out.format("Deleting policy from bucket: \"%s\"\n\n", bucketName); Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); deleteS3BucketPolicy(s3, bucketName); s3.close(); } // Delete the bucket policy. public static void deleteS3BucketPolicy(S3Client s3, String bucketName) { DeleteBucketPolicyRequest delReq = DeleteBucketPolicyRequest.builder() .bucket(bucketName) .build(); try { s3.deleteBucketPolicy(delReq); System.out.println("Done!"); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }

The following code example shows how to use DeleteBucketWebsite.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.DeleteBucketWebsiteRequest; import software.amazon.awssdk.services.s3.model.S3Exception; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DeleteWebsiteConfiguration { public static void main(String[] args) { final String usage = """ Usage: <bucketName> Where: bucketName - The Amazon S3 bucket to delete the website configuration from. """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; System.out.format("Deleting website configuration for Amazon S3 bucket: %s\n", bucketName); Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); deleteBucketWebsiteConfig(s3, bucketName); System.out.println("Done!"); s3.close(); } public static void deleteBucketWebsiteConfig(S3Client s3, String bucketName) { DeleteBucketWebsiteRequest delReq = DeleteBucketWebsiteRequest.builder() .bucket(bucketName) .build(); try { s3.deleteBucketWebsite(delReq); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.out.println("Failed to delete website configuration!"); System.exit(1); } } }

The following code example shows how to use DeleteObjects.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.core.sync.RequestBody; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.PutObjectRequest; import software.amazon.awssdk.services.s3.model.ObjectIdentifier; import software.amazon.awssdk.services.s3.model.Delete; import software.amazon.awssdk.services.s3.model.DeleteObjectsRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import java.util.ArrayList; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class DeleteMultiObjects { public static void main(String[] args) { final String usage = """ Usage: <bucketName> Where: bucketName - the Amazon S3 bucket name. """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); deleteBucketObjects(s3, bucketName); s3.close(); } public static void deleteBucketObjects(S3Client s3, String bucketName) { // Upload three sample objects to the specfied Amazon S3 bucket. ArrayList<ObjectIdentifier> keys = new ArrayList<>(); PutObjectRequest putOb; ObjectIdentifier objectId; for (int i = 0; i < 3; i++) { String keyName = "delete object example " + i; objectId = ObjectIdentifier.builder() .key(keyName) .build(); putOb = PutObjectRequest.builder() .bucket(bucketName) .key(keyName) .build(); s3.putObject(putOb, RequestBody.fromString(keyName)); keys.add(objectId); } System.out.println(keys.size() + " objects successfully created."); // Delete multiple objects in one request. Delete del = Delete.builder() .objects(keys) .build(); try { DeleteObjectsRequest multiObjectDeleteRequest = DeleteObjectsRequest.builder() .bucket(bucketName) .delete(del) .build(); s3.deleteObjects(multiObjectDeleteRequest); System.out.println("Multiple objects are deleted!"); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
  • For API details, see DeleteObjects in AWS SDK for Java 2.x API Reference.
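To remove a single object rather than a batch, DeleteObject is simpler. A minimal sketch, assuming an existing S3Client named s3 and placeholder bucket and key names:

import software.amazon.awssdk.services.s3.model.DeleteObjectRequest;

DeleteObjectRequest deleteRequest = DeleteObjectRequest.builder()
        .bucket("amzn-s3-demo-bucket")  // placeholder bucket name
        .key("delete object example 0") // placeholder key
        .build();

// Deleting a nonexistent key still succeeds; S3 treats the delete as idempotent.
s3.deleteObject(deleteRequest);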

The following code example shows how to use GetBucketAcl.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.GetObjectAclRequest; import software.amazon.awssdk.services.s3.model.GetObjectAclResponse; import software.amazon.awssdk.services.s3.model.Grant; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class GetAcl { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <objectKey> Where: bucketName - The Amazon S3 bucket to get the access control list (ACL) for. objectKey - The object to get the ACL for.\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String objectKey = args[1]; System.out.println("Retrieving ACL for object: " + objectKey); System.out.println("in bucket: " + bucketName); Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); getBucketACL(s3, objectKey, bucketName); s3.close(); System.out.println("Done!"); } public static String getBucketACL(S3Client s3, String objectKey, String bucketName) { try { GetObjectAclRequest aclReq = GetObjectAclRequest.builder() .bucket(bucketName) .key(objectKey) .build(); GetObjectAclResponse aclRes = s3.getObjectAcl(aclReq); List<Grant> grants = aclRes.grants(); String grantee = ""; for (Grant grant : grants) { System.out.format(" %s: %s\n", grant.grantee().id(), grant.permission()); grantee = grant.grantee().id(); } return grantee; } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } return ""; } }
  • For API details, see GetBucketAcl in AWS SDK for Java 2.x API Reference.
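Note that the example above retrieves an object ACL with GetObjectAcl. To read the ACL of the bucket itself, use the GetBucketAcl operation; a minimal sketch, assuming an existing S3Client named s3 and a placeholder bucket name:

import software.amazon.awssdk.services.s3.model.GetBucketAclRequest;
import software.amazon.awssdk.services.s3.model.GetBucketAclResponse;

GetBucketAclRequest bucketAclRequest = GetBucketAclRequest.builder()
        .bucket("amzn-s3-demo-bucket") // placeholder bucket name
        .build();

GetBucketAclResponse bucketAcl = s3.getBucketAcl(bucketAclRequest);

// Print each grantee and its permission, as the object-level example does.
bucketAcl.grants().forEach(grant ->
        System.out.format(" %s: %s%n", grant.grantee().id(), grant.permission()));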

The following code example shows how to use GetBucketPolicy.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.GetBucketPolicyRequest; import software.amazon.awssdk.services.s3.model.GetBucketPolicyResponse; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class GetBucketPolicy { public static void main(String[] args) { final String usage = """ Usage: <bucketName> Where: bucketName - The Amazon S3 bucket to get the policy from. """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; System.out.format("Getting policy for bucket: \"%s\"\n\n", bucketName); Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); String polText = getPolicy(s3, bucketName); System.out.println("Policy Text: " + polText); s3.close(); } public static String getPolicy(S3Client s3, String bucketName) { String policyText; System.out.format("Getting policy for bucket: \"%s\"\n\n", bucketName); GetBucketPolicyRequest policyReq = GetBucketPolicyRequest.builder() .bucket(bucketName) .build(); try { GetBucketPolicyResponse policyRes = s3.getBucketPolicy(policyReq); policyText = policyRes.policy(); return policyText; } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } return ""; } }
  • For API details, see GetBucketPolicy in AWS SDK for Java 2.x API Reference.

The following code example shows how to use GetObject.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Read data as a byte array using an S3Client.

import software.amazon.awssdk.core.ResponseBytes; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.GetObjectRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.model.GetObjectResponse; import java.io.File; import java.io.FileOutputStream; import java.io.IOException; import java.io.OutputStream; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class GetObjectData { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <keyName> <path> Where: bucketName - The Amazon S3 bucket name.\s keyName - The key name.\s path - The path where the file is written to.\s """; if (args.length != 3) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String keyName = args[1]; String path = args[2]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); getObjectBytes(s3, bucketName, keyName, path); } public static void getObjectBytes(S3Client s3, String bucketName, String keyName, String path) { try { GetObjectRequest objectRequest = GetObjectRequest .builder() .key(keyName) .bucket(bucketName) .build(); ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(objectRequest); byte[] data = objectBytes.asByteArray(); // Write the data to a local file. File myFile = new File(path); OutputStream os = new FileOutputStream(myFile); os.write(data); System.out.println("Successfully obtained bytes from an S3 object"); os.close(); } catch (IOException ex) { ex.printStackTrace(); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }

Use an S3TransferManager to download an object in an S3 bucket to a local file. View the complete file and test.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedFileDownload;
import software.amazon.awssdk.transfer.s3.model.DownloadFileRequest;
import software.amazon.awssdk.transfer.s3.model.FileDownload;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;

import java.io.IOException;
import java.net.URISyntaxException;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

    public Long downloadFile(S3TransferManager transferManager, String bucketName,
                             String key, String downloadedFileWithPath) {
        DownloadFileRequest downloadFileRequest = DownloadFileRequest.builder()
                .getObjectRequest(b -> b.bucket(bucketName).key(key))
                .destination(Paths.get(downloadedFileWithPath))
                .build();

        FileDownload downloadFile = transferManager.downloadFile(downloadFileRequest);

        CompletedFileDownload downloadResult = downloadFile.completionFuture().join();
        logger.info("Content length [{}]", downloadResult.response().contentLength());
        return downloadResult.response().contentLength();
    }

Read tags that belong to an object using an S3Client.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.GetObjectTaggingRequest; import software.amazon.awssdk.services.s3.model.GetObjectTaggingResponse; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.model.Tag; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class GetObjectTags { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <keyName>\s Where: bucketName - The Amazon S3 bucket name.\s keyName - A key name that represents the object.\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String keyName = args[1]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); listTags(s3, bucketName, keyName); s3.close(); } public static void listTags(S3Client s3, String bucketName, String keyName) { try { GetObjectTaggingRequest getTaggingRequest = GetObjectTaggingRequest .builder() .key(keyName) .bucket(bucketName) .build(); GetObjectTaggingResponse tags = s3.getObjectTagging(getTaggingRequest); List<Tag> tagSet = tags.tagSet(); for (Tag tag : tagSet) { System.out.println(tag.key()); System.out.println(tag.value()); } } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }

Get a URL for an object using an S3Client.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.GetUrlRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import java.net.URL; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class GetObjectUrl { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <keyName>\s Where: bucketName - The Amazon S3 bucket name. keyName - A key name that represents the object.\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String keyName = args[1]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); getURL(s3, bucketName, keyName); s3.close(); } public static void getURL(S3Client s3, String bucketName, String keyName) { try { GetUrlRequest request = GetUrlRequest.builder() .bucket(bucketName) .key(keyName) .build(); URL url = s3.utilities().getUrl(request); System.out.println("The URL for " + keyName + " is " + url); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }

Get an object by using an S3Presigner client object.

import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.net.HttpURLConnection; import java.time.Duration; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.model.GetObjectRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest; import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest; import software.amazon.awssdk.services.s3.presigner.S3Presigner; import software.amazon.awssdk.utils.IoUtils; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class GetObjectPresignedUrl { public static void main(String[] args) { final String USAGE = """ Usage: <bucketName> <keyName>\s Where: bucketName - The Amazon S3 bucket name.\s keyName - A key name that represents a text file.\s """; if (args.length != 2) { System.out.println(USAGE); System.exit(1); } String bucketName = args[0]; String keyName = args[1]; Region region = Region.US_EAST_1; S3Presigner presigner = S3Presigner.builder() .region(region) .build(); getPresignedUrl(presigner, bucketName, keyName); presigner.close(); } public static void getPresignedUrl(S3Presigner presigner, String bucketName, String keyName) { try { GetObjectRequest getObjectRequest = GetObjectRequest.builder() .bucket(bucketName) .key(keyName) .build(); GetObjectPresignRequest getObjectPresignRequest = GetObjectPresignRequest.builder() .signatureDuration(Duration.ofMinutes(60)) .getObjectRequest(getObjectRequest) .build(); PresignedGetObjectRequest presignedGetObjectRequest = presigner.presignGetObject(getObjectPresignRequest); String theUrl = presignedGetObjectRequest.url().toString(); System.out.println("Presigned URL: " + theUrl); HttpURLConnection connection = (HttpURLConnection) presignedGetObjectRequest.url().openConnection(); presignedGetObjectRequest.httpRequest().headers().forEach((header, values) -> { values.forEach(value -> { connection.addRequestProperty(header, value); }); }); // Send any request payload that the service needs (not needed when // isBrowserExecutable is true). if (presignedGetObjectRequest.signedPayload().isPresent()) { connection.setDoOutput(true); try (InputStream signedPayload = presignedGetObjectRequest.signedPayload().get().asInputStream(); OutputStream httpOutputStream = connection.getOutputStream()) { IoUtils.copy(signedPayload, httpOutputStream); } } // Download the result of executing the request. try (InputStream content = connection.getInputStream()) { System.out.println("Service returned response: "); IoUtils.copy(content, System.out); } } catch (S3Exception | IOException e) { e.getStackTrace(); } } }

Get an object by using a ResponseTransformer object and S3Client.

import software.amazon.awssdk.core.ResponseBytes; import software.amazon.awssdk.core.sync.ResponseTransformer; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.GetObjectRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.model.GetObjectResponse; import java.io.File; import java.io.FileOutputStream; import java.io.IOException; import java.io.OutputStream; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class GetDataResponseTransformer { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <keyName> <path> Where: bucketName - The Amazon S3 bucket name.\s keyName - The key name.\s path - The path where the file is written to.\s """; if (args.length != 3) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String keyName = args[1]; String path = args[2]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); getObjectBytes(s3, bucketName, keyName, path); s3.close(); } public static void getObjectBytes(S3Client s3, String bucketName, String keyName, String path) { try { GetObjectRequest objectRequest = GetObjectRequest .builder() .key(keyName) .bucket(bucketName) .build(); ResponseBytes<GetObjectResponse> objectBytes = s3.getObject(objectRequest, ResponseTransformer.toBytes()); byte[] data = objectBytes.asByteArray(); // Write the data to a local file. File myFile = new File(path); OutputStream os = new FileOutputStream(myFile); os.write(data); System.out.println("Successfully obtained bytes from an S3 object"); os.close(); } catch (IOException ex) { ex.printStackTrace(); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
  • For API details, see GetObject in AWS SDK for Java 2.x API Reference.
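When the object should go straight to disk without buffering the whole body in memory, the response transformer can write to a file directly. A minimal sketch, assuming an existing S3Client named s3 and placeholder names; the call generally fails if the target file already exists.

import java.nio.file.Paths;
import software.amazon.awssdk.core.sync.ResponseTransformer;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

GetObjectRequest request = GetObjectRequest.builder()
        .bucket("amzn-s3-demo-bucket") // placeholder bucket name
        .key("data/report.csv")        // placeholder key
        .build();

// Stream the object body directly into the local file.
s3.getObject(request, ResponseTransformer.toFile(Paths.get("/tmp/report.csv")));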

The following code example shows how to use GetObjectLegalHold.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

    // Get the legal hold details for an S3 object.
    public ObjectLockLegalHold getObjectLegalHold(String bucketName, String objectKey) {
        try {
            GetObjectLegalHoldRequest legalHoldRequest = GetObjectLegalHoldRequest.builder()
                    .bucket(bucketName)
                    .key(objectKey)
                    .build();

            GetObjectLegalHoldResponse response = getClient().getObjectLegalHold(legalHoldRequest);
            System.out.println("Object legal hold for " + objectKey + " in " + bucketName +
                    ":\n\tStatus: " + response.legalHold().status());
            return response.legalHold();

        } catch (S3Exception ex) {
            System.out.println("\tUnable to fetch legal hold: '" + ex.getMessage() + "'");
        }

        return null;
    }

The following code example shows how to use GetObjectLockConfiguration.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

    // Get the object lock configuration details for an S3 bucket.
    public void getBucketObjectLockConfiguration(String bucketName) {
        GetObjectLockConfigurationRequest objectLockConfigurationRequest = GetObjectLockConfigurationRequest.builder()
                .bucket(bucketName)
                .build();

        GetObjectLockConfigurationResponse response = getClient().getObjectLockConfiguration(objectLockConfigurationRequest);
        System.out.println("Bucket object lock config for " + bucketName + ": ");
        System.out.println("\tEnabled: " + response.objectLockConfiguration().objectLockEnabled());
        System.out.println("\tRule: " + response.objectLockConfiguration().rule().defaultRetention());
    }

The following code example shows how to use GetObjectRetention.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

    // Get the retention period for an S3 object.
    public ObjectLockRetention getObjectRetention(String bucketName, String key) {
        try {
            GetObjectRetentionRequest retentionRequest = GetObjectRetentionRequest.builder()
                    .bucket(bucketName)
                    .key(key)
                    .build();

            GetObjectRetentionResponse response = getClient().getObjectRetention(retentionRequest);
            System.out.println("\tObject retention for " + key + " in " + bucketName + ": " +
                    response.retention().mode() + " until " + response.retention().retainUntilDate() + ".");
            return response.retention();

        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            return null;
        }
    }

The following code example shows how to use HeadObject.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Determine the content type of an object.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.model.S3Exception; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class GetObjectContentType { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <keyName>> Where: bucketName - The Amazon S3 bucket name.\s keyName - The key name.\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String keyName = args[1]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); getContentType(s3, bucketName, keyName); s3.close(); } public static void getContentType(S3Client s3, String bucketName, String keyName) { try { HeadObjectRequest objectRequest = HeadObjectRequest.builder() .key(keyName) .bucket(bucketName) .build(); HeadObjectResponse objectHead = s3.headObject(objectRequest); String type = objectHead.contentType(); System.out.println("The object content type is " + type); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }

Get the restore status of an object.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.model.S3Exception; public class GetObjectRestoreStatus { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <keyName>\s Where: bucketName - The Amazon S3 bucket name.\s keyName - A key name that represents the object.\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String keyName = args[1]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); checkStatus(s3, bucketName, keyName); s3.close(); } public static void checkStatus(S3Client s3, String bucketName, String keyName) { try { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucketName) .key(keyName) .build(); HeadObjectResponse response = s3.headObject(headObjectRequest); System.out.println("The Amazon S3 object restoration status is " + response.restore()); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
  • For API details, see HeadObject in AWS SDK for Java 2.x API Reference.
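HeadObject is also a lightweight way to check whether a key exists before downloading it. A minimal sketch, assuming an existing S3Client and the hypothetical helper name objectExists; NoSuchKeyException is thrown when the object is absent.

import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.NoSuchKeyException;

    public static boolean objectExists(S3Client s3, String bucketName, String keyName) {
        try {
            s3.headObject(HeadObjectRequest.builder()
                    .bucket(bucketName)
                    .key(keyName)
                    .build());
            return true;  // The metadata call succeeded, so the object exists.
        } catch (NoSuchKeyException e) {
            return false; // S3 returned 404 for this key.
        }
    }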

The following code example shows how to use ListBuckets.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.Bucket; import software.amazon.awssdk.services.s3.model.ListBucketsResponse; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class ListBuckets { public static void main(String[] args) { Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); listAllBuckets(s3); } public static void listAllBuckets(S3Client s3) { ListBucketsResponse response = s3.listBuckets(); List<Bucket> bucketList = response.buckets(); for (Bucket bucket: bucketList) { System.out.println("Bucket name "+bucket.name()); } } }
  • For API details, see ListBuckets in AWS SDK for Java 2.x API Reference.

The following code example shows how to use ListMultipartUploads.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.ListMultipartUploadsRequest; import software.amazon.awssdk.services.s3.model.ListMultipartUploadsResponse; import software.amazon.awssdk.services.s3.model.MultipartUpload; import software.amazon.awssdk.services.s3.model.S3Exception; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class ListMultipartUploads { public static void main(String[] args) { final String usage = """ Usage: <bucketName>\s Where: bucketName - The name of the Amazon S3 bucket where an in-progress multipart upload is occurring. """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); listUploads(s3, bucketName); s3.close(); } public static void listUploads(S3Client s3, String bucketName) { try { ListMultipartUploadsRequest listMultipartUploadsRequest = ListMultipartUploadsRequest.builder() .bucket(bucketName) .build(); ListMultipartUploadsResponse response = s3.listMultipartUploads(listMultipartUploadsRequest); List<MultipartUpload> uploads = response.uploads(); for (MultipartUpload upload : uploads) { System.out.println("Upload in progress: Key = \"" + upload.key() + "\", id = " + upload.uploadId()); } } catch (S3Exception e) { System.err.println(e.getMessage()); System.exit(1); } } }
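In-progress uploads found this way continue to accrue storage charges until they are completed or aborted. The following sketch, assuming an existing S3Client named s3 and the hypothetical helper name abortAllUploads, aborts every upload returned for the bucket.

import software.amazon.awssdk.services.s3.model.AbortMultipartUploadRequest;
import software.amazon.awssdk.services.s3.model.ListMultipartUploadsRequest;

    public static void abortAllUploads(S3Client s3, String bucketName) {
        ListMultipartUploadsRequest listRequest = ListMultipartUploadsRequest.builder()
                .bucket(bucketName)
                .build();

        // Abort each in-progress multipart upload so its stored parts are freed.
        s3.listMultipartUploads(listRequest).uploads().forEach(upload ->
                s3.abortMultipartUpload(AbortMultipartUploadRequest.builder()
                        .bucket(bucketName)
                        .key(upload.key())
                        .uploadId(upload.uploadId())
                        .build()));
    }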

The following code example shows how to use ListObjectsV2.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.ListObjectsRequest; import software.amazon.awssdk.services.s3.model.ListObjectsResponse; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.model.S3Object; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class ListObjects { public static void main(String[] args) { final String usage = """ Usage: <bucketName>\s Where: bucketName - The Amazon S3 bucket from which objects are read.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); listBucketObjects(s3, bucketName); s3.close(); } public static void listBucketObjects(S3Client s3, String bucketName) { try { ListObjectsRequest listObjects = ListObjectsRequest .builder() .bucket(bucketName) .build(); ListObjectsResponse res = s3.listObjects(listObjects); List<S3Object> objects = res.contents(); for (S3Object myValue : objects) { System.out.print("\n The name of the key is " + myValue.key()); System.out.print("\n The object is " + calKb(myValue.size()) + " KBs"); System.out.print("\n The owner is " + myValue.owner()); } } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } // convert bytes to kbs. private static long calKb(Long val) { return val / 1024; } }

List objects using pagination.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.ListObjectsV2Request; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.paginators.ListObjectsV2Iterable; public class ListObjectsPaginated { public static void main(String[] args) { final String usage = """ Usage: <bucketName>\s Where: bucketName - The Amazon S3 bucket from which objects are read.\s """; if (args.length != 1) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); listBucketObjects(s3, bucketName); s3.close(); } public static void listBucketObjects(S3Client s3, String bucketName) { try { ListObjectsV2Request listReq = ListObjectsV2Request.builder() .bucket(bucketName) .maxKeys(1) .build(); ListObjectsV2Iterable listRes = s3.listObjectsV2Paginator(listReq); listRes.stream() .flatMap(r -> r.contents().stream()) .forEach(content -> System.out.println(" Key: " + content.key() + " size = " + content.size())); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
  • For API details, see ListObjectsV2 in AWS SDK for Java 2.x API Reference.
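To restrict the listing to a logical "folder", add a prefix to the request. A minimal sketch, assuming an existing S3Client named s3 and placeholder bucket and prefix values:

import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;

ListObjectsV2Request prefixedRequest = ListObjectsV2Request.builder()
        .bucket("amzn-s3-demo-bucket") // placeholder bucket name
        .prefix("photos/2023/")        // only keys that start with this prefix are returned
        .build();

s3.listObjectsV2Paginator(prefixedRequest).contents().forEach(object ->
        System.out.println(object.key() + " (" + object.size() + " bytes)"));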

The following code example shows how to use PutBucketAcl.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.AccessControlPolicy; import software.amazon.awssdk.services.s3.model.Grant; import software.amazon.awssdk.services.s3.model.Permission; import software.amazon.awssdk.services.s3.model.PutBucketAclRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.model.Type; import java.util.ArrayList; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class SetAcl { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <id>\s Where: bucketName - The Amazon S3 bucket to grant permissions on.\s id - The ID of the owner of this bucket (you can get this value from the AWS Management Console). """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String id = args[1]; System.out.format("Setting access \n"); System.out.println(" in bucket: " + bucketName); Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); setBucketAcl(s3, bucketName, id); System.out.println("Done!"); s3.close(); } public static void setBucketAcl(S3Client s3, String bucketName, String id) { try { Grant ownerGrant = Grant.builder() .grantee(builder -> builder.id(id) .type(Type.CANONICAL_USER)) .permission(Permission.FULL_CONTROL) .build(); List<Grant> grantList2 = new ArrayList<>(); grantList2.add(ownerGrant); AccessControlPolicy acl = AccessControlPolicy.builder() .owner(builder -> builder.id(id)) .grants(grantList2) .build(); PutBucketAclRequest putAclReq = PutBucketAclRequest.builder() .bucket(bucketName) .accessControlPolicy(acl) .build(); s3.putBucketAcl(putAclReq); } catch (S3Exception e) { e.printStackTrace(); System.exit(1); } } }
  • For API details, see PutBucketAcl in AWS SDK for Java 2.x API Reference.

The following code example shows how to use PutBucketCors.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import java.util.ArrayList; import java.util.List; import software.amazon.awssdk.services.s3.model.GetBucketCorsRequest; import software.amazon.awssdk.services.s3.model.GetBucketCorsResponse; import software.amazon.awssdk.services.s3.model.DeleteBucketCorsRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.model.CORSRule; import software.amazon.awssdk.services.s3.model.CORSConfiguration; import software.amazon.awssdk.services.s3.model.PutBucketCorsRequest; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class S3Cors { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <accountId>\s Where: bucketName - The Amazon S3 bucket to upload an object into. accountId - The id of the account that owns the Amazon S3 bucket. """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String accountId = args[1]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); setCorsInformation(s3, bucketName, accountId); getBucketCorsInformation(s3, bucketName, accountId); deleteBucketCorsInformation(s3, bucketName, accountId); s3.close(); } public static void deleteBucketCorsInformation(S3Client s3, String bucketName, String accountId) { try { DeleteBucketCorsRequest bucketCorsRequest = DeleteBucketCorsRequest.builder() .bucket(bucketName) .expectedBucketOwner(accountId) .build(); s3.deleteBucketCors(bucketCorsRequest); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } public static void getBucketCorsInformation(S3Client s3, String bucketName, String accountId) { try { GetBucketCorsRequest bucketCorsRequest = GetBucketCorsRequest.builder() .bucket(bucketName) .expectedBucketOwner(accountId) .build(); GetBucketCorsResponse corsResponse = s3.getBucketCors(bucketCorsRequest); List<CORSRule> corsRules = corsResponse.corsRules(); for (CORSRule rule : corsRules) { System.out.println("allowOrigins: " + rule.allowedOrigins()); System.out.println("AllowedMethod: " + rule.allowedMethods()); } } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } public static void setCorsInformation(S3Client s3, String bucketName, String accountId) { List<String> allowMethods = new ArrayList<>(); allowMethods.add("PUT"); allowMethods.add("POST"); allowMethods.add("DELETE"); List<String> allowOrigins = new ArrayList<>(); allowOrigins.add("http://example.com"); try { // Define CORS rules. CORSRule corsRule = CORSRule.builder() .allowedMethods(allowMethods) .allowedOrigins(allowOrigins) .build(); List<CORSRule> corsRules = new ArrayList<>(); corsRules.add(corsRule); CORSConfiguration configuration = CORSConfiguration.builder() .corsRules(corsRules) .build(); PutBucketCorsRequest putBucketCorsRequest = PutBucketCorsRequest.builder() .bucket(bucketName) .corsConfiguration(configuration) .expectedBucketOwner(accountId) .build(); s3.putBucketCors(putBucketCorsRequest); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
  • For API details, see PutBucketCors in AWS SDK for Java 2.x API Reference.
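CORS rules can also declare which request headers are allowed and how long browsers may cache the preflight response. A minimal sketch of such a rule, using the same CORSRule builder as the example above; the origin is a placeholder.

import software.amazon.awssdk.services.s3.model.CORSRule;

CORSRule browserUploadRule = CORSRule.builder()
        .allowedOrigins("https://www.example.com") // placeholder origin
        .allowedMethods("GET", "PUT")
        .allowedHeaders("*")                       // accept any request header in the preflight check
        .maxAgeSeconds(3000)                       // let browsers cache the preflight result
        .build();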

The following code example shows how to use PutBucketLifecycleConfiguration.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.LifecycleRuleFilter; import software.amazon.awssdk.services.s3.model.Transition; import software.amazon.awssdk.services.s3.model.GetBucketLifecycleConfigurationRequest; import software.amazon.awssdk.services.s3.model.GetBucketLifecycleConfigurationResponse; import software.amazon.awssdk.services.s3.model.DeleteBucketLifecycleRequest; import software.amazon.awssdk.services.s3.model.TransitionStorageClass; import software.amazon.awssdk.services.s3.model.LifecycleRule; import software.amazon.awssdk.services.s3.model.ExpirationStatus; import software.amazon.awssdk.services.s3.model.BucketLifecycleConfiguration; import software.amazon.awssdk.services.s3.model.PutBucketLifecycleConfigurationRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import java.util.ArrayList; import java.util.List; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class LifecycleConfiguration { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <accountId>\s Where: bucketName - The Amazon Simple Storage Service (Amazon S3) bucket to upload an object into. accountId - The id of the account that owns the Amazon S3 bucket. """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String accountId = args[1]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); setLifecycleConfig(s3, bucketName, accountId); getLifecycleConfig(s3, bucketName, accountId); deleteLifecycleConfig(s3, bucketName, accountId); System.out.println("You have successfully created, updated, and deleted a Lifecycle configuration"); s3.close(); } public static void setLifecycleConfig(S3Client s3, String bucketName, String accountId) { try { // Create a rule to archive objects with the "glacierobjects/" prefix to Amazon // S3 Glacier. LifecycleRuleFilter ruleFilter = LifecycleRuleFilter.builder() .prefix("glacierobjects/") .build(); Transition transition = Transition.builder() .storageClass(TransitionStorageClass.GLACIER) .days(0) .build(); LifecycleRule rule1 = LifecycleRule.builder() .id("Archive immediately rule") .filter(ruleFilter) .transitions(transition) .status(ExpirationStatus.ENABLED) .build(); // Create a second rule. Transition transition2 = Transition.builder() .storageClass(TransitionStorageClass.GLACIER) .days(0) .build(); List<Transition> transitionList = new ArrayList<>(); transitionList.add(transition2); LifecycleRuleFilter ruleFilter2 = LifecycleRuleFilter.builder() .prefix("glacierobjects/") .build(); LifecycleRule rule2 = LifecycleRule.builder() .id("Archive and then delete rule") .filter(ruleFilter2) .transitions(transitionList) .status(ExpirationStatus.ENABLED) .build(); // Add the LifecycleRule objects to an ArrayList. 
ArrayList<LifecycleRule> ruleList = new ArrayList<>(); ruleList.add(rule1); ruleList.add(rule2); BucketLifecycleConfiguration lifecycleConfiguration = BucketLifecycleConfiguration.builder() .rules(ruleList) .build(); PutBucketLifecycleConfigurationRequest putBucketLifecycleConfigurationRequest = PutBucketLifecycleConfigurationRequest .builder() .bucket(bucketName) .lifecycleConfiguration(lifecycleConfiguration) .expectedBucketOwner(accountId) .build(); s3.putBucketLifecycleConfiguration(putBucketLifecycleConfigurationRequest); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } // Retrieve the configuration and add a new rule. public static void getLifecycleConfig(S3Client s3, String bucketName, String accountId) { try { GetBucketLifecycleConfigurationRequest getBucketLifecycleConfigurationRequest = GetBucketLifecycleConfigurationRequest .builder() .bucket(bucketName) .expectedBucketOwner(accountId) .build(); GetBucketLifecycleConfigurationResponse response = s3 .getBucketLifecycleConfiguration(getBucketLifecycleConfigurationRequest); List<LifecycleRule> newList = new ArrayList<>(); List<LifecycleRule> rules = response.rules(); for (LifecycleRule rule : rules) { newList.add(rule); } // Add a new rule with both a prefix predicate and a tag predicate. LifecycleRuleFilter ruleFilter = LifecycleRuleFilter.builder() .prefix("YearlyDocuments/") .build(); Transition transition = Transition.builder() .storageClass(TransitionStorageClass.GLACIER) .days(3650) .build(); LifecycleRule rule1 = LifecycleRule.builder() .id("NewRule") .filter(ruleFilter) .transitions(transition) .status(ExpirationStatus.ENABLED) .build(); // Add the new rule to the list. newList.add(rule1); BucketLifecycleConfiguration lifecycleConfiguration = BucketLifecycleConfiguration.builder() .rules(newList) .build(); PutBucketLifecycleConfigurationRequest putBucketLifecycleConfigurationRequest = PutBucketLifecycleConfigurationRequest .builder() .bucket(bucketName) .lifecycleConfiguration(lifecycleConfiguration) .expectedBucketOwner(accountId) .build(); s3.putBucketLifecycleConfiguration(putBucketLifecycleConfigurationRequest); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } // Delete the configuration from the Amazon S3 bucket. public static void deleteLifecycleConfig(S3Client s3, String bucketName, String accountId) { try { DeleteBucketLifecycleRequest deleteBucketLifecycleRequest = DeleteBucketLifecycleRequest .builder() .bucket(bucketName) .expectedBucketOwner(accountId) .build(); s3.deleteBucketLifecycle(deleteBucketLifecycleRequest); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
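Lifecycle rules can expire objects as well as transition them. A minimal sketch of an expiration-only rule, built with the same model classes as the example above; the prefix and retention period are placeholders.

import software.amazon.awssdk.services.s3.model.ExpirationStatus;
import software.amazon.awssdk.services.s3.model.LifecycleExpiration;
import software.amazon.awssdk.services.s3.model.LifecycleRule;
import software.amazon.awssdk.services.s3.model.LifecycleRuleFilter;

LifecycleRule expireOldLogs = LifecycleRule.builder()
        .id("Expire old logs")
        .filter(LifecycleRuleFilter.builder()
                .prefix("logs/")           // placeholder prefix
                .build())
        .expiration(LifecycleExpiration.builder()
                .days(365)                 // delete objects one year after creation
                .build())
        .status(ExpirationStatus.ENABLED)
        .build();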

The following code example shows how to use PutBucketNotificationConfiguration.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.Event; import software.amazon.awssdk.services.s3.model.NotificationConfiguration; import software.amazon.awssdk.services.s3.model.PutBucketNotificationConfigurationRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.model.TopicConfiguration; import java.util.ArrayList; import java.util.List; public class SetBucketEventBridgeNotification { public static void main(String[] args) { final String usage = """ Usage: <bucketName>\s Where: bucketName - The Amazon S3 bucket.\s topicArn - The Simple Notification Service topic ARN.\s id - An id value used for the topic configuration. This value is displayed in the AWS Management Console.\s """; if (args.length != 3) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String topicArn = args[1]; String id = args[2]; Region region = Region.US_EAST_1; S3Client s3Client = S3Client.builder() .region(region) .build(); setBucketNotification(s3Client, bucketName, topicArn, id); s3Client.close(); } public static void setBucketNotification(S3Client s3Client, String bucketName, String topicArn, String id) { try { List<Event> events = new ArrayList<>(); events.add(Event.S3_OBJECT_CREATED_PUT); TopicConfiguration config = TopicConfiguration.builder() .topicArn(topicArn) .events(events) .id(id) .build(); List<TopicConfiguration> topics = new ArrayList<>(); topics.add(config); NotificationConfiguration configuration = NotificationConfiguration.builder() .topicConfigurations(topics) .build(); PutBucketNotificationConfigurationRequest configurationRequest = PutBucketNotificationConfigurationRequest .builder() .bucket(bucketName) .notificationConfiguration(configuration) .skipDestinationValidation(true) .build(); // Set the bucket notification configuration. s3Client.putBucketNotificationConfiguration(configurationRequest); System.out.println("Added bucket " + bucketName + " with EventBridge events enabled."); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
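The example's class name mentions EventBridge, but the configuration it builds delivers events to an SNS topic. To route all bucket events to Amazon EventBridge instead, the notification configuration only needs an empty EventBridgeConfiguration. A minimal sketch, assuming an existing S3Client named s3Client and a placeholder bucket name:

import software.amazon.awssdk.services.s3.model.EventBridgeConfiguration;
import software.amazon.awssdk.services.s3.model.NotificationConfiguration;
import software.amazon.awssdk.services.s3.model.PutBucketNotificationConfigurationRequest;

NotificationConfiguration eventBridgeConfig = NotificationConfiguration.builder()
        .eventBridgeConfiguration(EventBridgeConfiguration.builder().build())
        .build();

// All supported S3 events for the bucket are now delivered to EventBridge.
s3Client.putBucketNotificationConfiguration(PutBucketNotificationConfigurationRequest.builder()
        .bucket("amzn-s3-demo-bucket") // placeholder bucket name
        .notificationConfiguration(eventBridgeConfig)
        .build());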

The following code example shows how to use PutBucketPolicy.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.PutBucketPolicyRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.regions.Region; import java.io.IOException; import java.nio.charset.StandardCharsets; import java.nio.file.Files; import java.nio.file.Paths; import java.util.List; import com.fasterxml.jackson.core.JsonParser; import com.fasterxml.jackson.databind.ObjectMapper; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class SetBucketPolicy { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <polFile> Where: bucketName - The Amazon S3 bucket to set the policy on. polFile - A JSON file containing the policy (see the Amazon S3 Readme for an example).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String polFile = args[1]; String policyText = getBucketPolicyFromFile(polFile); Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); setPolicy(s3, bucketName, policyText); s3.close(); } public static void setPolicy(S3Client s3, String bucketName, String policyText) { System.out.println("Setting policy:"); System.out.println("----"); System.out.println(policyText); System.out.println("----"); System.out.format("On Amazon S3 bucket: \"%s\"\n", bucketName); try { PutBucketPolicyRequest policyReq = PutBucketPolicyRequest.builder() .bucket(bucketName) .policy(policyText) .build(); s3.putBucketPolicy(policyReq); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } System.out.println("Done!"); } // Loads a JSON-formatted policy from a file public static String getBucketPolicyFromFile(String policyFile) { StringBuilder fileText = new StringBuilder(); try { List<String> lines = Files.readAllLines(Paths.get(policyFile), StandardCharsets.UTF_8); for (String line : lines) { fileText.append(line); } } catch (IOException e) { System.out.format("Problem reading file: \"%s\"", policyFile); System.out.println(e.getMessage()); } try { final JsonParser parser = new ObjectMapper().getFactory().createParser(fileText.toString()); while (parser.nextToken() != null) { } } catch (IOException jpe) { jpe.printStackTrace(); } return fileText.toString(); } }
  • For API details, see PutBucketPolicy in AWS SDK for Java 2.x API Reference.
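To confirm that the policy was applied, you can read it back with GetBucketPolicy. The following snippet is a minimal sketch, not part of the example above, and assumes the same S3Client.

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetBucketPolicyRequest;
import software.amazon.awssdk.services.s3.model.GetBucketPolicyResponse;
import software.amazon.awssdk.services.s3.model.S3Exception;

// Read back the bucket policy to confirm that it was applied.
public static String getPolicy(S3Client s3, String bucketName) {
    try {
        GetBucketPolicyRequest policyReq = GetBucketPolicyRequest.builder()
                .bucket(bucketName)
                .build();
        GetBucketPolicyResponse policyRes = s3.getBucketPolicy(policyReq);
        // The policy document is returned as a JSON string.
        return policyRes.policy();
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        return "";
    }
}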

The following code example shows how to use PutBucketWebsite.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.IndexDocument; import software.amazon.awssdk.services.s3.model.PutBucketWebsiteRequest; import software.amazon.awssdk.services.s3.model.WebsiteConfiguration; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.regions.Region; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class SetWebsiteConfiguration { public static void main(String[] args) { final String usage = """ Usage: <bucketName> [indexdoc]\s Where: bucketName - The Amazon S3 bucket to set the website configuration on.\s indexdoc - The index document (for example, 'index.html'). If not specified, 'index.html' is used. """; if (args.length < 1 || args.length > 2) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String indexDoc = (args.length == 2) ? args[1] : "index.html"; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); setWebsiteConfig(s3, bucketName, indexDoc); s3.close(); } public static void setWebsiteConfig(S3Client s3, String bucketName, String indexDoc) { try { WebsiteConfiguration websiteConfig = WebsiteConfiguration.builder() .indexDocument(IndexDocument.builder().suffix(indexDoc).build()) .build(); PutBucketWebsiteRequest pubWebsiteReq = PutBucketWebsiteRequest.builder() .bucket(bucketName) .websiteConfiguration(websiteConfig) .build(); s3.putBucketWebsite(pubWebsiteReq); System.out.println("The call was successful"); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
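The website configuration above sets only an index document. If you also want Amazon S3 to serve a custom error page, the configuration accepts an ErrorDocument as well. The following is a brief sketch; the error.html key is a placeholder chosen for illustration.

import software.amazon.awssdk.services.s3.model.ErrorDocument;
import software.amazon.awssdk.services.s3.model.IndexDocument;
import software.amazon.awssdk.services.s3.model.WebsiteConfiguration;

// Build a website configuration with both an index document and an error document.
WebsiteConfiguration websiteConfig = WebsiteConfiguration.builder()
        .indexDocument(IndexDocument.builder().suffix("index.html").build())
        .errorDocument(ErrorDocument.builder().key("error.html").build())
        .build();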

The following code example shows how to use PutObject.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Upload a file to a bucket using an S3Client.

import software.amazon.awssdk.core.sync.RequestBody; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.PutObjectRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import java.io.File; import java.util.HashMap; import java.util.Map; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class PutObject { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <objectKey> <objectPath>\s Where: bucketName - The Amazon S3 bucket to upload an object into. objectKey - The object to upload (for example, book.pdf). objectPath - The path where the file is located (for example, C:/AWS/book2.pdf).\s """; if (args.length != 3) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String objectKey = args[1]; String objectPath = args[2]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); putS3Object(s3, bucketName, objectKey, objectPath); s3.close(); } // This example uses RequestBody.fromFile to avoid loading the whole file into // memory. public static void putS3Object(S3Client s3, String bucketName, String objectKey, String objectPath) { try { Map<String, String> metadata = new HashMap<>(); metadata.put("x-amz-meta-myVal", "test"); PutObjectRequest putOb = PutObjectRequest.builder() .bucket(bucketName) .key(objectKey) .metadata(metadata) .build(); s3.putObject(putOb, RequestBody.fromFile(new File(objectPath))); System.out.println("Successfully placed " + objectKey + " into bucket " + bucketName); } catch (S3Exception e) { System.err.println(e.getMessage()); System.exit(1); } } }

Use an S3TransferManager to upload a file to a bucket. View the complete file and test.

import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.transfer.s3.S3TransferManager; import software.amazon.awssdk.transfer.s3.model.CompletedFileUpload; import software.amazon.awssdk.transfer.s3.model.FileUpload; import software.amazon.awssdk.transfer.s3.model.UploadFileRequest; import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener; import java.net.URI; import java.net.URISyntaxException; import java.net.URL; import java.nio.file.Paths; import java.util.UUID; public String uploadFile(S3TransferManager transferManager, String bucketName, String key, URI filePathURI) { UploadFileRequest uploadFileRequest = UploadFileRequest.builder() .putObjectRequest(b -> b.bucket(bucketName).key(key)) .source(Paths.get(filePathURI)) .build(); FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest); CompletedFileUpload uploadResult = fileUpload.completionFuture().join(); return uploadResult.response().eTag(); }
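The uploadFile method above expects an S3TransferManager instance. A minimal way to construct one and call the method, assuming default credentials and Region configuration, might look like the following; the bucket name, key, and file path are placeholders.

import software.amazon.awssdk.transfer.s3.S3TransferManager;
import java.nio.file.Paths;

// Create a transfer manager with default settings and upload a local file.
S3TransferManager transferManager = S3TransferManager.create();
String eTag = uploadFile(transferManager, "amzn-s3-demo-bucket", "example-key",
        Paths.get("/tmp/example.txt").toUri());
System.out.println("Uploaded object ETag: " + eTag);
transferManager.close();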

Upload an object to a bucket and set tags using an S3Client.

public static void putS3ObjectTags(S3Client s3, String bucketName, String objectKey, String objectPath) { try { Tag tag1 = Tag.builder() .key("Tag 1") .value("This is tag 1") .build(); Tag tag2 = Tag.builder() .key("Tag 2") .value("This is tag 2") .build(); List<Tag> tags = new ArrayList<>(); tags.add(tag1); tags.add(tag2); Tagging allTags = Tagging.builder() .tagSet(tags) .build(); PutObjectRequest putOb = PutObjectRequest.builder() .bucket(bucketName) .key(objectKey) .tagging(allTags) .build(); s3.putObject(putOb, RequestBody.fromBytes(getObjectFile(objectPath))); } catch (S3Exception e) { System.err.println(e.getMessage()); System.exit(1); } } public static void updateObjectTags(S3Client s3, String bucketName, String objectKey) { try { GetObjectTaggingRequest taggingRequest = GetObjectTaggingRequest.builder() .bucket(bucketName) .key(objectKey) .build(); GetObjectTaggingResponse getTaggingRes = s3.getObjectTagging(taggingRequest); List<Tag> obTags = getTaggingRes.tagSet(); for (Tag sinTag : obTags) { System.out.println("The tag key is: " + sinTag.key()); System.out.println("The tag value is: " + sinTag.value()); } // Replace the object's tags with two new tags. Tag tag3 = Tag.builder() .key("Tag 3") .value("This is tag 3") .build(); Tag tag4 = Tag.builder() .key("Tag 4") .value("This is tag 4") .build(); List<Tag> tags = new ArrayList<>(); tags.add(tag3); tags.add(tag4); Tagging updatedTags = Tagging.builder() .tagSet(tags) .build(); PutObjectTaggingRequest taggingRequest1 = PutObjectTaggingRequest.builder() .bucket(bucketName) .key(objectKey) .tagging(updatedTags) .build(); s3.putObjectTagging(taggingRequest1); GetObjectTaggingResponse getTaggingRes2 = s3.getObjectTagging(taggingRequest); List<Tag> modTags = getTaggingRes2.tagSet(); for (Tag sinTag : modTags) { System.out.println("The tag key is: " + sinTag.key()); System.out.println("The tag value is: " + sinTag.value()); } } catch (S3Exception e) { System.err.println(e.getMessage()); System.exit(1); } } // Return a byte array. private static byte[] getObjectFile(String filePath) { FileInputStream fileInputStream = null; byte[] bytesArray = null; try { File file = new File(filePath); bytesArray = new byte[(int) file.length()]; fileInputStream = new FileInputStream(file); fileInputStream.read(bytesArray); } catch (IOException e) { e.printStackTrace(); } finally { if (fileInputStream != null) { try { fileInputStream.close(); } catch (IOException e) { e.printStackTrace(); } } } return bytesArray; } }

Upload an object to a bucket and set metadata using an S3Client.

import software.amazon.awssdk.core.sync.RequestBody; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.PutObjectRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import java.io.File; import java.util.HashMap; import java.util.Map; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class PutObjectMetadata { public static void main(String[] args) { final String USAGE = """ Usage: <bucketName> <objectKey> <objectPath>\s Where: bucketName - The Amazon S3 bucket to upload an object into. objectKey - The object to upload (for example, book.pdf). objectPath - The path where the file is located (for example, C:/AWS/book2.pdf).\s """; if (args.length != 3) { System.out.println(USAGE); System.exit(1); } String bucketName = args[0]; String objectKey = args[1]; String objectPath = args[2]; System.out.println("Putting object " + objectKey + " into bucket " + bucketName); System.out.println(" in bucket: " + bucketName); Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); putS3Object(s3, bucketName, objectKey, objectPath); s3.close(); } // This example uses RequestBody.fromFile to avoid loading the whole file into // memory. public static void putS3Object(S3Client s3, String bucketName, String objectKey, String objectPath) { try { Map<String, String> metadata = new HashMap<>(); metadata.put("author", "Mary Doe"); metadata.put("version", "1.0.0.0"); PutObjectRequest putOb = PutObjectRequest.builder() .bucket(bucketName) .key(objectKey) .metadata(metadata) .build(); s3.putObject(putOb, RequestBody.fromFile(new File(objectPath))); System.out.println("Successfully placed " + objectKey + " into bucket " + bucketName); } catch (S3Exception e) { System.err.println(e.getMessage()); System.exit(1); } } }

Upload an object to a bucket and set an object retention value using an S3Client.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.PutObjectRetentionRequest; import software.amazon.awssdk.services.s3.model.ObjectLockRetention; import software.amazon.awssdk.services.s3.model.S3Exception; import java.time.Instant; import java.time.LocalDate; import java.time.LocalDateTime; import java.time.ZoneOffset; /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class PutObjectRetention { public static void main(String[] args) { final String usage = """ Usage: <key> <bucketName>\s Where: key - The name of the object (for example, book.pdf).\s bucketName - The Amazon S3 bucket name that contains the object (for example, bucket1).\s """; if (args.length != 2) { System.out.println(usage); System.exit(1); } String key = args[0]; String bucketName = args[1]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); setRetentionPeriod(s3, key, bucketName); s3.close(); } public static void setRetentionPeriod(S3Client s3, String key, String bucket) { try { LocalDate localDate = LocalDate.parse("2020-07-17"); LocalDateTime localDateTime = localDate.atStartOfDay(); Instant instant = localDateTime.toInstant(ZoneOffset.UTC); ObjectLockRetention lockRetention = ObjectLockRetention.builder() .mode("COMPLIANCE") .retainUntilDate(instant) .build(); PutObjectRetentionRequest retentionRequest = PutObjectRetentionRequest.builder() .bucket(bucket) .key(key) .bypassGovernanceRetention(true) .retention(lockRetention) .build(); // To set retention on an object, the Amazon S3 bucket must have object // locking enabled; otherwise an exception is thrown. s3.putObjectRetention(retentionRequest); System.out.print("An object retention configuration was successfully placed on the object"); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
  • For API details, see PutObject in AWS SDK for Java 2.x API Reference.

The following code example shows how to use PutObjectLegalHold.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

// Set or modify a legal hold on an object in an S3 bucket. public void modifyObjectLegalHold(String bucketName, String objectKey, boolean legalHoldOn) { ObjectLockLegalHold legalHold ; if (legalHoldOn) { legalHold = ObjectLockLegalHold.builder() .status(ObjectLockLegalHoldStatus.ON) .build(); } else { legalHold = ObjectLockLegalHold.builder() .status(ObjectLockLegalHoldStatus.OFF) .build(); } PutObjectLegalHoldRequest legalHoldRequest = PutObjectLegalHoldRequest.builder() .bucket(bucketName) .key(objectKey) .legalHold(legalHold) .build(); getClient().putObjectLegalHold(legalHoldRequest) ; System.out.println("Modified legal hold for "+ objectKey +" in "+bucketName +"."); }
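This excerpt calls a getClient() helper that is defined elsewhere in the complete example on GitHub. A minimal sketch of such a helper, assuming the US_EAST_1 Region and default credentials, might look like the following:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// Hypothetical helper that returns a configured S3Client for the calls shown in these excerpts.
private S3Client getClient() {
    return S3Client.builder()
            .region(Region.US_EAST_1)
            .build();
}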

The following code example shows how to use PutObjectLockConfiguration.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Set the object lock configuration of a bucket.

// Enable object lock on an existing bucket. public void enableObjectLockOnBucket(String bucketName) { try { VersioningConfiguration versioningConfiguration = VersioningConfiguration.builder() .status(BucketVersioningStatus.ENABLED) .build(); PutBucketVersioningRequest putBucketVersioningRequest = PutBucketVersioningRequest.builder() .bucket(bucketName) .versioningConfiguration(versioningConfiguration) .build(); // Enable versioning on the bucket. getClient().putBucketVersioning(putBucketVersioningRequest); PutObjectLockConfigurationRequest request = PutObjectLockConfigurationRequest.builder() .bucket(bucketName) .objectLockConfiguration(ObjectLockConfiguration.builder() .objectLockEnabled(ObjectLockEnabled.ENABLED) .build()) .build(); getClient().putObjectLockConfiguration(request); System.out.println("Successfully enabled object lock on "+bucketName); } catch (S3Exception ex) { System.out.println("Error modifying object lock: '" + ex.getMessage() + "'"); } }

Set the default retention period of a bucket.

// Set or modify a retention period on an S3 bucket. public void modifyBucketDefaultRetention(String bucketName) { VersioningConfiguration versioningConfiguration = VersioningConfiguration.builder() .mfaDelete(MFADelete.DISABLED) .status(BucketVersioningStatus.ENABLED) .build(); PutBucketVersioningRequest versioningRequest = PutBucketVersioningRequest.builder() .bucket(bucketName) .versioningConfiguration(versioningConfiguration) .build(); getClient().putBucketVersioning(versioningRequest); DefaultRetention retention = DefaultRetention.builder() .days(1) .mode(ObjectLockRetentionMode.GOVERNANCE) .build(); ObjectLockRule lockRule = ObjectLockRule.builder() .defaultRetention(retention) .build(); ObjectLockConfiguration objectLockConfiguration = ObjectLockConfiguration.builder() .objectLockEnabled(ObjectLockEnabled.ENABLED) .rule(lockRule) .build(); PutObjectLockConfigurationRequest putObjectLockConfigurationRequest = PutObjectLockConfigurationRequest.builder() .bucket(bucketName) .objectLockConfiguration(objectLockConfiguration) .build(); getClient().putObjectLockConfiguration(putObjectLockConfigurationRequest); System.out.println("Added a default retention to bucket " + bucketName + "."); }
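To verify the bucket-level settings after either of the preceding calls, you can read the configuration back with GetObjectLockConfiguration. This sketch is not part of the example above and assumes the same getClient() helper.

import software.amazon.awssdk.services.s3.model.GetObjectLockConfigurationRequest;
import software.amazon.awssdk.services.s3.model.GetObjectLockConfigurationResponse;
import software.amazon.awssdk.services.s3.model.S3Exception;

// Read back the object lock configuration of a bucket.
public void printBucketObjectLockConfiguration(String bucketName) {
    try {
        GetObjectLockConfigurationRequest request = GetObjectLockConfigurationRequest.builder()
                .bucket(bucketName)
                .build();
        GetObjectLockConfigurationResponse response = getClient().getObjectLockConfiguration(request);
        System.out.println("Object lock enabled: " + response.objectLockConfiguration().objectLockEnabled());
    } catch (S3Exception ex) {
        System.out.println("Unable to read the object lock configuration: '" + ex.getMessage() + "'");
    }
}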

The following code example shows how to use PutObjectRetention.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

// Set or modify a retention period on an object in an S3 bucket. public void modifyObjectRetentionPeriod(String bucketName, String objectKey) { // Calculate the instant one day from now. Instant futureInstant = Instant.now().plus(1, ChronoUnit.DAYS); // Convert the Instant to a ZonedDateTime object with a specific time zone. ZonedDateTime zonedDateTime = futureInstant.atZone(ZoneId.systemDefault()); // Define a formatter for human-readable output. DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"); // Format the ZonedDateTime object to a human-readable date string. String humanReadableDate = formatter.format(zonedDateTime); // Print the formatted date string. System.out.println("Formatted Date: " + humanReadableDate); ObjectLockRetention retention = ObjectLockRetention.builder() .mode(ObjectLockRetentionMode.GOVERNANCE) .retainUntilDate(futureInstant) .build(); PutObjectRetentionRequest retentionRequest = PutObjectRetentionRequest.builder() .bucket(bucketName) .key(objectKey) .retention(retention) .build(); getClient().putObjectRetention(retentionRequest); System.out.println("Set retention for "+objectKey +" in " +bucketName +" until "+ humanReadableDate +"."); }
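To check the retention that was applied, GetObjectRetention returns the mode and retain-until date for an object. The following is a brief sketch, again assuming the getClient() helper used by the excerpt above.

import software.amazon.awssdk.services.s3.model.GetObjectRetentionRequest;
import software.amazon.awssdk.services.s3.model.GetObjectRetentionResponse;
import software.amazon.awssdk.services.s3.model.S3Exception;

// Read the retention settings for an object.
public void printObjectRetention(String bucketName, String objectKey) {
    try {
        GetObjectRetentionRequest request = GetObjectRetentionRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();
        GetObjectRetentionResponse response = getClient().getObjectRetention(request);
        System.out.println("Mode: " + response.retention().mode()
                + ", retain until: " + response.retention().retainUntilDate());
    } catch (S3Exception ex) {
        System.out.println("Unable to read the retention settings: '" + ex.getMessage() + "'");
    }
}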

The following code example shows how to use RestoreObject.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.RestoreRequest; import software.amazon.awssdk.services.s3.model.GlacierJobParameters; import software.amazon.awssdk.services.s3.model.RestoreObjectRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.model.Tier; /* * For more information about restoring an object, see "Restoring an archived object" at * https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html * * Before running this Java V2 code example, set up your development environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html */ public class RestoreObject { public static void main(String[] args) { final String usage = """ Usage: <bucketName> <keyName> <expectedBucketOwner> Where: bucketName - The Amazon S3 bucket name.\s keyName - The key name of an object with a Storage class value of Glacier.\s expectedBucketOwner - The account that owns the bucket (you can obtain this value from the AWS Management Console).\s """; if (args.length != 3) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String keyName = args[1]; String expectedBucketOwner = args[2]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); restoreS3Object(s3, bucketName, keyName, expectedBucketOwner); s3.close(); } public static void restoreS3Object(S3Client s3, String bucketName, String keyName, String expectedBucketOwner) { try { RestoreRequest restoreRequest = RestoreRequest.builder() .days(10) .glacierJobParameters(GlacierJobParameters.builder().tier(Tier.STANDARD).build()) .build(); RestoreObjectRequest objectRequest = RestoreObjectRequest.builder() .expectedBucketOwner(expectedBucketOwner) .bucket(bucketName) .key(keyName) .restoreRequest(restoreRequest) .build(); s3.restoreObject(objectRequest); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } }
  • For API details, see RestoreObject in AWS SDK for Java 2.x API Reference.
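Restoring an object from an archival storage class is asynchronous. One way to check progress, not shown in the example above, is to call HeadObject and inspect the restore field, which reports whether a restore is in progress and when a restored copy expires. A minimal sketch:

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.model.S3Exception;

// Check the restore status of an archived object.
public static void checkRestoreStatus(S3Client s3, String bucketName, String keyName) {
    try {
        HeadObjectRequest headRequest = HeadObjectRequest.builder()
                .bucket(bucketName)
                .key(keyName)
                .build();
        HeadObjectResponse headResponse = s3.headObject(headRequest);
        // The restore field is null until a restore has been requested for the object.
        System.out.println("Restore status: " + headResponse.restore());
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
    }
}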

The following code example shows how to use SelectObjectContent.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

The following example shows a query using a JSON object. The complete example also shows the use of a CSV object.

import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.core.async.AsyncRequestBody; import software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody; import software.amazon.awssdk.core.exception.SdkException; import software.amazon.awssdk.services.s3.S3AsyncClient; import software.amazon.awssdk.services.s3.model.CSVInput; import software.amazon.awssdk.services.s3.model.CSVOutput; import software.amazon.awssdk.services.s3.model.CompressionType; import software.amazon.awssdk.services.s3.model.ExpressionType; import software.amazon.awssdk.services.s3.model.FileHeaderInfo; import software.amazon.awssdk.services.s3.model.InputSerialization; import software.amazon.awssdk.services.s3.model.JSONInput; import software.amazon.awssdk.services.s3.model.JSONOutput; import software.amazon.awssdk.services.s3.model.JSONType; import software.amazon.awssdk.services.s3.model.ObjectIdentifier; import software.amazon.awssdk.services.s3.model.OutputSerialization; import software.amazon.awssdk.services.s3.model.Progress; import software.amazon.awssdk.services.s3.model.PutObjectResponse; import software.amazon.awssdk.services.s3.model.SelectObjectContentRequest; import software.amazon.awssdk.services.s3.model.SelectObjectContentResponseHandler; import software.amazon.awssdk.services.s3.model.Stats; import java.io.IOException; import java.net.URL; import java.util.ArrayList; import java.util.List; import java.util.UUID; import java.util.concurrent.CompletableFuture; public class SelectObjectContentExample { static final Logger logger = LoggerFactory.getLogger(SelectObjectContentExample.class); static final String BUCKET_NAME = "select-object-content-" + UUID.randomUUID(); static final S3AsyncClient s3AsyncClient = S3AsyncClient.create(); static String FILE_CSV = "csv"; static String FILE_JSON = "json"; static String URL_CSV = "https://raw.githubusercontent.com/mledoze/countries/master/dist/countries.csv"; static String URL_JSON = "https://raw.githubusercontent.com/mledoze/countries/master/dist/countries.json"; public static void main(String[] args) { SelectObjectContentExample selectObjectContentExample = new SelectObjectContentExample(); try { SelectObjectContentExample.setUp(); selectObjectContentExample.runSelectObjectContentMethodForJSON(); selectObjectContentExample.runSelectObjectContentMethodForCSV(); } catch (SdkException e) { logger.error(e.getMessage(), e); System.exit(1); } finally { SelectObjectContentExample.tearDown(); } } EventStreamInfo runSelectObjectContentMethodForJSON() { // Set up request parameters. final String queryExpression = "select * from s3object[*][*] c where c.area < 350000"; final String fileType = FILE_JSON; InputSerialization inputSerialization = InputSerialization.builder() .json(JSONInput.builder().type(JSONType.DOCUMENT).build()) .compressionType(CompressionType.NONE) .build(); OutputSerialization outputSerialization = OutputSerialization.builder() .json(JSONOutput.builder().recordDelimiter(null).build()) .build(); // Build the SelectObjectContentRequest. SelectObjectContentRequest select = SelectObjectContentRequest.builder() .bucket(BUCKET_NAME) .key(FILE_JSON) .expression(queryExpression) .expressionType(ExpressionType.SQL) .inputSerialization(inputSerialization) .outputSerialization(outputSerialization) .build(); EventStreamInfo eventStreamInfo = new EventStreamInfo(); // Call the selectObjectContent method with the request and a response handler. 
// Supply an EventStreamInfo object to the response handler to gather records and information from the response. s3AsyncClient.selectObjectContent(select, buildResponseHandler(eventStreamInfo)).join(); // Log out information gathered while processing the response stream. long recordCount = eventStreamInfo.getRecords().stream().mapToInt(record -> record.split("\n").length ).sum(); logger.info("Total records {}: {}", fileType, recordCount); logger.info("Visitor onRecords for fileType {} called {} times", fileType, eventStreamInfo.getCountOnRecordsCalled()); logger.info("Visitor onStats for fileType {}, {}", fileType, eventStreamInfo.getStats()); logger.info("Visitor onContinuations for fileType {}, {}", fileType, eventStreamInfo.getCountContinuationEvents()); return eventStreamInfo; } static SelectObjectContentResponseHandler buildResponseHandler(EventStreamInfo eventStreamInfo) { // Use a Visitor to process the response stream. This visitor logs information and gathers details while processing. final SelectObjectContentResponseHandler.Visitor visitor = SelectObjectContentResponseHandler.Visitor.builder() .onRecords(r -> { logger.info("Record event received."); eventStreamInfo.addRecord(r.payload().asUtf8String()); eventStreamInfo.incrementOnRecordsCalled(); }) .onCont(ce -> { logger.info("Continuation event received."); eventStreamInfo.incrementContinuationEvents(); }) .onProgress(pe -> { Progress progress = pe.details(); logger.info("Progress event received:\n bytesScanned:{}\nbytesProcessed: {}\nbytesReturned:{}", progress.bytesScanned(), progress.bytesProcessed(), progress.bytesReturned()); }) .onEnd(ee -> logger.info("End event received.")) .onStats(se -> { logger.info("Stats event received."); eventStreamInfo.addStats(se.details()); }) .build(); // Build the SelectObjectContentResponseHandler with the visitor that processes the stream. return SelectObjectContentResponseHandler.builder() .subscriber(visitor).build(); } // The EventStreamInfo class is used to store information gathered while processing the response stream. static class EventStreamInfo { private final List<String> records = new ArrayList<>(); private Integer countOnRecordsCalled = 0; private Integer countContinuationEvents = 0; private Stats stats; void incrementOnRecordsCalled() { countOnRecordsCalled++; } void incrementContinuationEvents() { countContinuationEvents++; } void addRecord(String record) { records.add(record); } void addStats(Stats stats) { this.stats = stats; } public List<String> getRecords() { return records; } public Integer getCountOnRecordsCalled() { return countOnRecordsCalled; } public Integer getCountContinuationEvents() { return countContinuationEvents; } public Stats getStats() { return stats; } }
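As noted above, the complete example also queries a CSV object. The main difference is the input and output serialization. The following sketch shows those settings for a CSV file whose first line is a header row; it reuses the CSVInput, CSVOutput, and FileHeaderInfo classes already imported in the example.

import software.amazon.awssdk.services.s3.model.CSVInput;
import software.amazon.awssdk.services.s3.model.CSVOutput;
import software.amazon.awssdk.services.s3.model.CompressionType;
import software.amazon.awssdk.services.s3.model.FileHeaderInfo;
import software.amazon.awssdk.services.s3.model.InputSerialization;
import software.amazon.awssdk.services.s3.model.OutputSerialization;

// Treat the first CSV line as a header so column names can be used in the SQL expression.
InputSerialization csvInput = InputSerialization.builder()
        .csv(CSVInput.builder().fileHeaderInfo(FileHeaderInfo.USE).build())
        .compressionType(CompressionType.NONE)
        .build();

// Return the matching records as CSV.
OutputSerialization csvOutput = OutputSerialization.builder()
        .csv(CSVOutput.builder().build())
        .build();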

Scenarios

The following code example shows how to create a presigned URL for Amazon S3 and upload an object.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Generate a pre-signed URL for an object, then download it (GET request).

Imports.

import com.example.s3.util.PresignUrlUtils; import org.slf4j.Logger; import software.amazon.awssdk.http.HttpExecuteRequest; import software.amazon.awssdk.http.HttpExecuteResponse; import software.amazon.awssdk.http.SdkHttpClient; import software.amazon.awssdk.http.SdkHttpMethod; import software.amazon.awssdk.http.SdkHttpRequest; import software.amazon.awssdk.http.apache.ApacheHttpClient; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.GetObjectRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.presigner.S3Presigner; import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest; import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest; import software.amazon.awssdk.utils.IoUtils; import java.io.ByteArrayOutputStream; import java.io.File; import java.io.IOException; import java.io.InputStream; import java.net.HttpURLConnection; import java.net.URISyntaxException; import java.net.URL; import java.net.http.HttpClient; import java.net.http.HttpRequest; import java.net.http.HttpResponse; import java.nio.file.Paths; import java.time.Duration; import java.util.UUID;

Generate the URL.

/* Create a pre-signed URL to download an object in a subsequent GET request. */ public String createPresignedGetUrl(String bucketName, String keyName) { try (S3Presigner presigner = S3Presigner.create()) { GetObjectRequest objectRequest = GetObjectRequest.builder() .bucket(bucketName) .key(keyName) .build(); GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder() .signatureDuration(Duration.ofMinutes(10)) // The URL will expire in 10 minutes. .getObjectRequest(objectRequest) .build(); PresignedGetObjectRequest presignedRequest = presigner.presignGetObject(presignRequest); logger.info("Presigned URL: [{}]", presignedRequest.url().toString()); logger.info("HTTP method: [{}]", presignedRequest.httpRequest().method()); return presignedRequest.url().toExternalForm(); } }

Download the object by using any one of the following three approaches.

Use the JDK HttpURLConnection (since v1.1) class to do the download.

/* Use the JDK HttpURLConnection (since v1.1) class to do the download. */ public byte[] useHttpUrlConnectionToGet(String presignedUrlString) { ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(); // Capture the response body to a byte array. try { URL presignedUrl = new URL(presignedUrlString); HttpURLConnection connection = (HttpURLConnection) presignedUrl.openConnection(); connection.setRequestMethod("GET"); // Download the result of executing the request. try (InputStream content = connection.getInputStream()) { IoUtils.copy(content, byteArrayOutputStream); } logger.info("HTTP response code is " + connection.getResponseCode()); } catch (S3Exception | IOException e) { logger.error(e.getMessage(), e); } return byteArrayOutputStream.toByteArray(); }

Use the JDK HttpClient (since v11) class to do the download.

/* Use the JDK HttpClient (since v11) class to do the download. */ public byte[] useHttpClientToGet(String presignedUrlString) { ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(); // Capture the response body to a byte array. HttpRequest.Builder requestBuilder = HttpRequest.newBuilder(); HttpClient httpClient = HttpClient.newHttpClient(); try { URL presignedUrl = new URL(presignedUrlString); HttpResponse<InputStream> response = httpClient.send(requestBuilder .uri(presignedUrl.toURI()) .GET() .build(), HttpResponse.BodyHandlers.ofInputStream()); IoUtils.copy(response.body(), byteArrayOutputStream); logger.info("HTTP response code is " + response.statusCode()); } catch (URISyntaxException | InterruptedException | IOException e) { logger.error(e.getMessage(), e); } return byteArrayOutputStream.toByteArray(); }

Use the AWS SDK for Java SdkHttpClient class to do the download.

/* Use the AWS SDK for Java SdkHttpClient class to do the download. */ public byte[] useSdkHttpClientToGet(String presignedUrlString) { ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(); // Capture the response body to a byte array. try { URL presignedUrl = new URL(presignedUrlString); SdkHttpRequest request = SdkHttpRequest.builder() .method(SdkHttpMethod.GET) .uri(presignedUrl.toURI()) .build(); HttpExecuteRequest executeRequest = HttpExecuteRequest.builder() .request(request) .build(); try (SdkHttpClient sdkHttpClient = ApacheHttpClient.create()) { HttpExecuteResponse response = sdkHttpClient.prepareRequest(executeRequest).call(); response.responseBody().ifPresentOrElse( abortableInputStream -> { try { IoUtils.copy(abortableInputStream, byteArrayOutputStream); } catch (IOException e) { throw new RuntimeException(e); } }, () -> logger.error("No response body.")); logger.info("HTTP Response code is {}", response.httpResponse().statusCode()); } } catch (URISyntaxException | IOException e) { logger.error(e.getMessage(), e); } return byteArrayOutputStream.toByteArray(); }
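Putting the pieces together, a typical flow is to generate the presigned URL and then pass it to one of the download methods above. The bucket and key names in this short usage sketch are placeholders.

// Generate a presigned GET URL, then download the object with one of the approaches shown above.
String presignedUrl = createPresignedGetUrl("amzn-s3-demo-bucket", "example-object.txt");
byte[] objectBytes = useHttpUrlConnectionToGet(presignedUrl);
System.out.println("Downloaded " + objectBytes.length + " bytes.");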

Generate a pre-signed URL for an upload, then upload a file (PUT request).

Imports.

import com.example.s3.util.PresignUrlUtils; import org.slf4j.Logger; import software.amazon.awssdk.core.internal.sync.FileContentStreamProvider; import software.amazon.awssdk.http.HttpExecuteRequest; import software.amazon.awssdk.http.HttpExecuteResponse; import software.amazon.awssdk.http.SdkHttpClient; import software.amazon.awssdk.http.SdkHttpMethod; import software.amazon.awssdk.http.SdkHttpRequest; import software.amazon.awssdk.http.apache.ApacheHttpClient; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.PutObjectRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.presigner.S3Presigner; import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest; import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest; import java.io.File; import java.io.IOException; import java.io.OutputStream; import java.io.RandomAccessFile; import java.net.HttpURLConnection; import java.net.URISyntaxException; import java.net.URL; import java.net.http.HttpClient; import java.net.http.HttpRequest; import java.net.http.HttpResponse; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.nio.file.Path; import java.nio.file.Paths; import java.time.Duration; import java.util.Map; import java.util.UUID;

Generate the URL.

/* Create a presigned URL to use in a subsequent PUT request */ public String createPresignedUrl(String bucketName, String keyName, Map<String, String> metadata) { try (S3Presigner presigner = S3Presigner.create()) { PutObjectRequest objectRequest = PutObjectRequest.builder() .bucket(bucketName) .key(keyName) .metadata(metadata) .build(); PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder() .signatureDuration(Duration.ofMinutes(10)) // The URL expires in 10 minutes. .putObjectRequest(objectRequest) .build(); PresignedPutObjectRequest presignedRequest = presigner.presignPutObject(presignRequest); String myURL = presignedRequest.url().toString(); logger.info("Presigned URL to upload a file to: [{}]", myURL); logger.info("HTTP method: [{}]", presignedRequest.httpRequest().method()); return presignedRequest.url().toExternalForm(); } }

Upload a file object by using any one of the following three approaches.

Use the JDK HttpURLConnection (since v1.1) class to do the upload.

/* Use the JDK HttpURLConnection (since v1.1) class to do the upload. */ public void useHttpUrlConnectionToPut(String presignedUrlString, File fileToPut, Map<String, String> metadata) { logger.info("Begin [{}] upload", fileToPut.toString()); try { URL presignedUrl = new URL(presignedUrlString); HttpURLConnection connection = (HttpURLConnection) presignedUrl.openConnection(); connection.setDoOutput(true); metadata.forEach((k, v) -> connection.setRequestProperty("x-amz-meta-" + k, v)); connection.setRequestMethod("PUT"); OutputStream out = connection.getOutputStream(); try (RandomAccessFile file = new RandomAccessFile(fileToPut, "r"); FileChannel inChannel = file.getChannel()) { ByteBuffer buffer = ByteBuffer.allocate(8192); //Buffer size is 8k while (inChannel.read(buffer) > 0) { buffer.flip(); for (int i = 0; i < buffer.limit(); i++) { out.write(buffer.get()); } buffer.clear(); } } catch (IOException e) { logger.error(e.getMessage(), e); } out.close(); connection.getResponseCode(); logger.info("HTTP response code is " + connection.getResponseCode()); } catch (S3Exception | IOException e) { logger.error(e.getMessage(), e); } }

Use the JDK HttpClient (since v11) class to do the upload.

/* Use the JDK HttpClient (since v11) class to do the upload. */ public void useHttpClientToPut(String presignedUrlString, File fileToPut, Map<String, String> metadata) { logger.info("Begin [{}] upload", fileToPut.toString()); HttpRequest.Builder requestBuilder = HttpRequest.newBuilder(); metadata.forEach((k, v) -> requestBuilder.header("x-amz-meta-" + k, v)); HttpClient httpClient = HttpClient.newHttpClient(); try { final HttpResponse<Void> response = httpClient.send(requestBuilder .uri(new URL(presignedUrlString).toURI()) .PUT(HttpRequest.BodyPublishers.ofFile(Path.of(fileToPut.toURI()))) .build(), HttpResponse.BodyHandlers.discarding()); logger.info("HTTP response code is " + response.statusCode()); } catch (URISyntaxException | InterruptedException | IOException e) { logger.error(e.getMessage(), e); } }

Use the AWS SDK for Java V2 SdkHttpClient class to do the upload.

/* Use the AWS SDK for Java V2 SdkHttpClient class to do the upload. */ public void useSdkHttpClientToPut(String presignedUrlString, File fileToPut, Map<String, String> metadata) { logger.info("Begin [{}] upload", fileToPut.toString()); try { URL presignedUrl = new URL(presignedUrlString); SdkHttpRequest.Builder requestBuilder = SdkHttpRequest.builder() .method(SdkHttpMethod.PUT) .uri(presignedUrl.toURI()); // Add headers metadata.forEach((k, v) -> requestBuilder.putHeader("x-amz-meta-" + k, v)); // Finish building the request. SdkHttpRequest request = requestBuilder.build(); HttpExecuteRequest executeRequest = HttpExecuteRequest.builder() .request(request) .contentStreamProvider(new FileContentStreamProvider(fileToPut.toPath())) .build(); try (SdkHttpClient sdkHttpClient = ApacheHttpClient.create()) { HttpExecuteResponse response = sdkHttpClient.prepareRequest(executeRequest).call(); logger.info("Response code: {}", response.httpResponse().statusCode()); } } catch (URISyntaxException | IOException e) { logger.error(e.getMessage(), e); } }

The following code example shows how to download all objects in an Amazon Simple Storage Service (Amazon S3) bucket to a local directory.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use an S3TransferManager to download all S3 objects in the same S3 bucket. View the complete file and test.

import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.core.sync.RequestBody; import software.amazon.awssdk.services.s3.model.ObjectIdentifier; import software.amazon.awssdk.transfer.s3.S3TransferManager; import software.amazon.awssdk.transfer.s3.model.CompletedDirectoryDownload; import software.amazon.awssdk.transfer.s3.model.DirectoryDownload; import software.amazon.awssdk.transfer.s3.model.DownloadDirectoryRequest; import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.Paths; import java.util.HashSet; import java.util.Set; import java.util.UUID; import java.util.stream.Collectors; public Integer downloadObjectsToDirectory(S3TransferManager transferManager, URI destinationPathURI, String bucketName) { DirectoryDownload directoryDownload = transferManager.downloadDirectory(DownloadDirectoryRequest.builder() .destination(Paths.get(destinationPathURI)) .bucket(bucketName) .build()); CompletedDirectoryDownload completedDirectoryDownload = directoryDownload.completionFuture().join(); completedDirectoryDownload.failedTransfers() .forEach(fail -> logger.warn("Object [{}] failed to transfer", fail.toString())); return completedDirectoryDownload.failedTransfers().size(); }

The following code example shows how to:

  • Create a bucket and upload a file to it.

  • Download an object from a bucket.

  • Copy an object to a subfolder in a bucket.

  • List the objects in a bucket.

  • Delete the bucket objects and the bucket.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html * * This Java code example performs the following tasks: * * 1. Creates an Amazon S3 bucket. * 2. Uploads an object to the bucket. * 3. Downloads the object to another local file. * 4. Uploads an object using multipart upload. * 5. Lists all objects located in the Amazon S3 bucket. * 6. Copies the object to another Amazon S3 bucket. * 7. Deletes the object from the Amazon S3 bucket. * 8. Deletes the Amazon S3 bucket. */ public class S3Scenario { public static final String DASHES = new String(new char[80]).replace("\0", "-"); public static void main(String[] args) throws IOException { final String usage = """ Usage: <bucketName> <key> <objectPath> <savePath> <toBucket> Where: bucketName - The Amazon S3 bucket to create. key - The key to use. objectPath - The path where the file is located (for example, C:/AWS/book2.pdf). savePath - The path where the file is saved after it's downloaded (for example, C:/AWS/book2.pdf). toBucket - The Amazon S3 bucket that the object is copied to (for example, bucket2).\s """; if (args.length != 5) { System.out.println(usage); System.exit(1); } String bucketName = args[0]; String key = args[1]; String objectPath = args[2]; String savePath = args[3]; String toBucket = args[4]; Region region = Region.US_EAST_1; S3Client s3 = S3Client.builder() .region(region) .build(); System.out.println(DASHES); System.out.println("Welcome to the Amazon S3 example scenario."); System.out.println(DASHES); System.out.println(DASHES); System.out.println("1. Create an Amazon S3 bucket."); createBucket(s3, bucketName); System.out.println(DASHES); System.out.println(DASHES); System.out.println("2. Upload a local file to the Amazon S3 bucket."); uploadLocalFile(s3, bucketName, key, objectPath); System.out.println(DASHES); System.out.println(DASHES); System.out.println("3. Download the object to another local file."); getObjectBytes(s3, bucketName, key, savePath); System.out.println(DASHES); System.out.println(DASHES); System.out.println("4. Perform a multipart upload."); String multipartKey = "multiPartKey"; multipartUpload(s3, toBucket, multipartKey); System.out.println(DASHES); System.out.println(DASHES); System.out.println("5. List all objects located in the Amazon S3 bucket."); listAllObjects(s3, bucketName); anotherListExample(s3, bucketName); System.out.println(DASHES); System.out.println(DASHES); System.out.println("6. Copy the object to another Amazon S3 bucket."); copyBucketObject(s3, bucketName, key, toBucket); System.out.println(DASHES); System.out.println(DASHES); System.out.println("7. Delete the object from the Amazon S3 bucket."); deleteObjectFromBucket(s3, bucketName, key); System.out.println(DASHES); System.out.println(DASHES); System.out.println("8. Delete the Amazon S3 bucket."); deleteBucket(s3, bucketName); System.out.println(DASHES); System.out.println(DASHES); System.out.println("All Amazon S3 operations were successfully performed"); System.out.println(DASHES); s3.close(); } // Create a bucket by using an S3Waiter object. 
public static void createBucket(S3Client s3Client, String bucketName) { try { S3Waiter s3Waiter = s3Client.waiter(); CreateBucketRequest bucketRequest = CreateBucketRequest.builder() .bucket(bucketName) .build(); s3Client.createBucket(bucketRequest); HeadBucketRequest bucketRequestWait = HeadBucketRequest.builder() .bucket(bucketName) .build(); // Wait until the bucket is created and print out the response. WaiterResponse<HeadBucketResponse> waiterResponse = s3Waiter.waitUntilBucketExists(bucketRequestWait); waiterResponse.matched().response().ifPresent(System.out::println); System.out.println(bucketName + " is ready"); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } public static void deleteBucket(S3Client client, String bucket) { DeleteBucketRequest deleteBucketRequest = DeleteBucketRequest.builder() .bucket(bucket) .build(); client.deleteBucket(deleteBucketRequest); System.out.println(bucket + " was deleted."); } /** * Upload an object in parts. */ public static void multipartUpload(S3Client s3, String bucketName, String key) { int mB = 1024 * 1024; // First create a multipart upload and get the upload id. CreateMultipartUploadRequest createMultipartUploadRequest = CreateMultipartUploadRequest.builder() .bucket(bucketName) .key(key) .build(); CreateMultipartUploadResponse response = s3.createMultipartUpload(createMultipartUploadRequest); String uploadId = response.uploadId(); System.out.println(uploadId); // Upload all the different parts of the object. UploadPartRequest uploadPartRequest1 = UploadPartRequest.builder() .bucket(bucketName) .key(key) .uploadId(uploadId) .partNumber(1).build(); String etag1 = s3.uploadPart(uploadPartRequest1, RequestBody.fromByteBuffer(getRandomByteBuffer(5 * mB))) .eTag(); CompletedPart part1 = CompletedPart.builder().partNumber(1).eTag(etag1).build(); UploadPartRequest uploadPartRequest2 = UploadPartRequest.builder().bucket(bucketName).key(key) .uploadId(uploadId) .partNumber(2).build(); String etag2 = s3.uploadPart(uploadPartRequest2, RequestBody.fromByteBuffer(getRandomByteBuffer(3 * mB))) .eTag(); CompletedPart part2 = CompletedPart.builder().partNumber(2).eTag(etag2).build(); // Call completeMultipartUpload operation to tell S3 to merge all uploaded // parts and finish the multipart operation. CompletedMultipartUpload completedMultipartUpload = CompletedMultipartUpload.builder() .parts(part1, part2) .build(); CompleteMultipartUploadRequest completeMultipartUploadRequest = CompleteMultipartUploadRequest.builder() .bucket(bucketName) .key(key) .uploadId(uploadId) .multipartUpload(completedMultipartUpload) .build(); s3.completeMultipartUpload(completeMultipartUploadRequest); } private static ByteBuffer getRandomByteBuffer(int size) { byte[] b = new byte[size]; new Random().nextBytes(b); return ByteBuffer.wrap(b); } public static void getObjectBytes(S3Client s3, String bucketName, String keyName, String path) { try { GetObjectRequest objectRequest = GetObjectRequest .builder() .key(keyName) .bucket(bucketName) .build(); ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(objectRequest); byte[] data = objectBytes.asByteArray(); // Write the data to a local file. 
File myFile = new File(path); OutputStream os = new FileOutputStream(myFile); os.write(data); System.out.println("Successfully obtained bytes from an S3 object"); os.close(); } catch (IOException ex) { ex.printStackTrace(); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } } public static void uploadLocalFile(S3Client s3, String bucketName, String key, String objectPath) { PutObjectRequest objectRequest = PutObjectRequest.builder() .bucket(bucketName) .key(key) .build(); s3.putObject(objectRequest, RequestBody.fromFile(new File(objectPath))); } public static void listAllObjects(S3Client s3, String bucketName) { ListObjectsV2Request listObjectsReqManual = ListObjectsV2Request.builder() .bucket(bucketName) .maxKeys(1) .build(); boolean done = false; while (!done) { ListObjectsV2Response listObjResponse = s3.listObjectsV2(listObjectsReqManual); for (S3Object content : listObjResponse.contents()) { System.out.println(content.key()); } if (listObjResponse.nextContinuationToken() == null) { done = true; } listObjectsReqManual = listObjectsReqManual.toBuilder() .continuationToken(listObjResponse.nextContinuationToken()) .build(); } } public static void anotherListExample(S3Client s3, String bucketName) { ListObjectsV2Request listReq = ListObjectsV2Request.builder() .bucket(bucketName) .maxKeys(1) .build(); ListObjectsV2Iterable listRes = s3.listObjectsV2Paginator(listReq); // Process response pages. listRes.stream() .flatMap(r -> r.contents().stream()) .forEach(content -> System.out.println(" Key: " + content.key() + " size = " + content.size())); // Helper method to work with paginated collection of items directly. listRes.contents().stream() .forEach(content -> System.out.println(" Key: " + content.key() + " size = " + content.size())); for (S3Object content : listRes.contents()) { System.out.println(" Key: " + content.key() + " size = " + content.size()); } } public static void deleteObjectFromBucket(S3Client s3, String bucketName, String key) { DeleteObjectRequest deleteObjectRequest = DeleteObjectRequest.builder() .bucket(bucketName) .key(key) .build(); s3.deleteObject(deleteObjectRequest); System.out.println(key + " was deleted"); } public static String copyBucketObject(S3Client s3, String fromBucket, String objectKey, String toBucket) { String encodedUrl = null; try { encodedUrl = URLEncoder.encode(fromBucket + "/" + objectKey, StandardCharsets.UTF_8.toString()); } catch (UnsupportedEncodingException e) { System.out.println("URL could not be encoded: " + e.getMessage()); } CopyObjectRequest copyReq = CopyObjectRequest.builder() .copySource(encodedUrl) .destinationBucket(toBucket) .destinationKey(objectKey) .build(); try { CopyObjectResponse copyRes = s3.copyObject(copyReq); System.out.println("The " + objectKey + " was copied to " + toBucket); return copyRes.copyObjectResult().toString(); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } return ""; } }

The following code example shows how to get the legal hold configuration of an S3 object.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

// Get the legal hold details for an S3 object. public ObjectLockLegalHold getObjectLegalHold(String bucketName, String objectKey) { try { GetObjectLegalHoldRequest legalHoldRequest = GetObjectLegalHoldRequest.builder() .bucket(bucketName) .key(objectKey) .build(); GetObjectLegalHoldResponse response = getClient().getObjectLegalHold(legalHoldRequest); System.out.println("Object legal hold for " + objectKey + " in " + bucketName + ":\n\tStatus: " + response.legalHold().status()); return response.legalHold(); } catch (S3Exception ex) { System.out.println("\tUnable to fetch legal hold: '" + ex.getMessage() + "'"); } return null; }

The following code example shows how to work with S3 object lock features.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Run an interactive scenario demonstrating Amazon S3 object lock features.

import software.amazon.awssdk.services.s3.model.ObjectLockLegalHold; import software.amazon.awssdk.services.s3.model.ObjectLockRetention; import java.io.BufferedWriter; import java.io.IOException; import java.time.LocalDateTime; import java.time.format.DateTimeFormatter; import java.util.ArrayList; import java.util.List; import java.util.Scanner; import java.util.stream.Collectors; /* Before running this Java V2 code example, set up your development environment, including your credentials. For more information, see the following documentation topic: https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/setup.html This Java example performs the following tasks: 1. Create test Amazon Simple Storage Service (S3) buckets with different lock policies. 2. Upload sample objects to each bucket. 3. Set some Legal Hold and Retention Periods on objects and buckets. 4. Investigate lock policies by viewing settings or attempting to delete or overwrite objects. 5. Clean up objects and buckets. */ public class S3ObjectLockWorkflow { public static final String DASHES = new String(new char[80]).replace("\0", "-"); static String bucketName; static S3LockActions s3LockActions; private static final List<String> bucketNames = new ArrayList<>(); private static final List<String> fileNames = new ArrayList<>(); public static void main(String[] args) { // Get the current date and time to ensure bucket name is unique. LocalDateTime currentTime = LocalDateTime.now(); // Format the date and time as a string. DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyyMMddHHmmss"); String timeStamp = currentTime.format(formatter); s3LockActions = new S3LockActions(); bucketName = "bucket"+timeStamp; Scanner scanner = new Scanner(System.in); System.out.println(DASHES); System.out.println("Welcome to the Amazon Simple Storage Service (S3) Object Locking Workflow Scenario."); System.out.println("Press Enter to continue..."); scanner.nextLine(); configurationSetup(); System.out.println(DASHES); System.out.println(DASHES); setup(); System.out.println("Setup is complete. Press Enter to continue..."); scanner.nextLine(); System.out.println(DASHES); System.out.println(DASHES); System.out.println("Lets present the user with choices."); System.out.println("Press Enter to continue..."); scanner.nextLine(); demoActionChoices() ; System.out.println(DASHES); System.out.println(DASHES); System.out.println("Would you like to clean up the resources? (y/n)"); String delAns = scanner.nextLine().trim(); if (delAns.equalsIgnoreCase("y")) { cleanup(); System.out.println("Clean up is complete."); } System.out.println("Press Enter to continue..."); scanner.nextLine(); System.out.println(DASHES); System.out.println(DASHES); System.out.println("Amazon S3 Object Locking Workflow is complete."); System.out.println(DASHES); } // Present the user with the demo action choices. public static void demoActionChoices() { String[] choices = { "List all files in buckets.", "Attempt to delete a file.", "Attempt to delete a file with retention period bypass.", "Attempt to overwrite a file.", "View the object and bucket retention settings for a file.", "View the legal hold settings for a file.", "Finish the workflow." 
}; int choice = 0; while (true) { System.out.println(DASHES); choice = getChoiceResponse("Explore the S3 locking features by selecting one of the following choices:", choices); System.out.println(DASHES); System.out.println("You selected "+choices[choice]); switch (choice) { case 0 -> { s3LockActions.listBucketsAndObjects(bucketNames, true); } case 1 -> { System.out.println("Enter the number of the object to delete:"); List<S3InfoObject> allFiles = s3LockActions.listBucketsAndObjects(bucketNames, true); List<String> fileKeys = allFiles.stream().map(f -> f.getKeyName()).collect(Collectors.toList()); String[] fileKeysArray = fileKeys.toArray(new String[0]); int fileChoice = getChoiceResponse(null, fileKeysArray); String objectKey = fileKeys.get(fileChoice); String bucketName = allFiles.get(fileChoice).getBucketName(); String version = allFiles.get(fileChoice).getVersion(); s3LockActions.deleteObjectFromBucket(bucketName, objectKey, false, version); } case 2 -> { System.out.println("Enter the number of the object to delete:"); List<S3InfoObject> allFiles = s3LockActions.listBucketsAndObjects(bucketNames, true); List<String> fileKeys = allFiles.stream().map(f -> f.getKeyName()).collect(Collectors.toList()); String[] fileKeysArray = fileKeys.toArray(new String[0]); int fileChoice = getChoiceResponse(null, fileKeysArray); String objectKey = fileKeys.get(fileChoice); String bucketName = allFiles.get(fileChoice).getBucketName(); String version = allFiles.get(fileChoice).getVersion(); s3LockActions.deleteObjectFromBucket(bucketName, objectKey, true, version); } case 3 -> { System.out.println("Enter the number of the object to overwrite:"); List<S3InfoObject> allFiles = s3LockActions.listBucketsAndObjects(bucketNames, true); List<String> fileKeys = allFiles.stream().map(f -> f.getKeyName()).collect(Collectors.toList()); String[] fileKeysArray = fileKeys.toArray(new String[0]); int fileChoice = getChoiceResponse(null, fileKeysArray); String objectKey = fileKeys.get(fileChoice); String bucketName = allFiles.get(fileChoice).getBucketName(); // Attempt to overwrite the file. 
try (BufferedWriter writer = new BufferedWriter(new java.io.FileWriter(objectKey))) { writer.write("This is a modified text."); } catch (IOException e) { e.printStackTrace(); } s3LockActions.uploadFile(bucketName, objectKey, objectKey); } case 4 -> { System.out.println("Enter the number of the object to view the retention settings for:"); List<S3InfoObject> allFiles = s3LockActions.listBucketsAndObjects(bucketNames, true); List<String> fileKeys = allFiles.stream().map(f -> f.getKeyName()).collect(Collectors.toList()); String[] fileKeysArray = fileKeys.toArray(new String[0]); int fileChoice = getChoiceResponse(null, fileKeysArray); String objectKey = fileKeys.get(fileChoice); String bucketName = allFiles.get(fileChoice).getBucketName(); s3LockActions.getObjectRetention(bucketName, objectKey); } case 5 -> { System.out.println("Enter the number of the object to view:"); List<S3InfoObject> allFiles = s3LockActions.listBucketsAndObjects(bucketNames, true); List<String> fileKeys = allFiles.stream().map(f -> f.getKeyName()).collect(Collectors.toList()); String[] fileKeysArray = fileKeys.toArray(new String[0]); int fileChoice = getChoiceResponse(null, fileKeysArray); String objectKey = fileKeys.get(fileChoice); String bucketName = allFiles.get(fileChoice).getBucketName(); s3LockActions.getObjectLegalHold(bucketName, objectKey); s3LockActions.getBucketObjectLockConfiguration(bucketName); } case 6 -> { System.out.println("Exiting the workflow..."); return; } default -> { System.out.println("Invalid choice. Please select again."); } } } } // Clean up the resources from the scenario. private static void cleanup() { List<S3InfoObject> allFiles = s3LockActions.listBucketsAndObjects(bucketNames, false); for (S3InfoObject fileInfo : allFiles) { String bucketName = fileInfo.getBucketName(); String key = fileInfo.getKeyName(); String version = fileInfo.getVersion(); if (bucketName.contains("lock-enabled") || (bucketName.contains("retention-after-creation"))) { ObjectLockLegalHold legalHold = s3LockActions.getObjectLegalHold(bucketName, key); if (legalHold != null) { String holdStatus = legalHold.status().name(); System.out.println(holdStatus); if (holdStatus.compareTo("ON") == 0) { s3LockActions.modifyObjectLegalHold(bucketName, key, false); } } // Check for a retention period. ObjectLockRetention retention = s3LockActions.getObjectRetention(bucketName, key); boolean hasRetentionPeriod = retention != null; s3LockActions.deleteObjectFromBucket(bucketName, key, hasRetentionPeriod, version); } else { System.out.println(bucketName + " objects do not have object lock settings"); s3LockActions.deleteObjectFromBucket(bucketName, key, false, version); } } // Delete the buckets. System.out.println("Deleting the buckets."); for (String bucket : bucketNames){ s3LockActions.deleteBucketByName(bucket); } } private static void setup() { Scanner scanner = new Scanner(System.in); System.out.println(""" For this workflow, we will use the AWS SDK for Java to create several S3 buckets and files to demonstrate working with S3 locking features. """); System.out.println("S3 buckets can be created either with or without object lock enabled."); System.out.println("Press Enter to continue..."); scanner.nextLine(); // Create three S3 buckets.
s3LockActions.createBucketWithLockOptions(false, bucketNames.get(0)); s3LockActions.createBucketWithLockOptions(true, bucketNames.get(1)); s3LockActions.createBucketWithLockOptions(false, bucketNames.get(2)); System.out.println("Press Enter to continue."); scanner.nextLine(); System.out.println("Bucket "+bucketNames.get(2) +" will be configured to use object locking with a default retention period."); s3LockActions.modifyBucketDefaultRetention(bucketNames.get(2)); System.out.println("Press Enter to continue."); scanner.nextLine(); System.out.println("Object lock policies can also be added to existing buckets. For this example, we will use "+bucketNames.get(1)); s3LockActions.enableObjectLockOnBucket(bucketNames.get(1)); System.out.println("Press Enter to continue."); scanner.nextLine(); // Upload some files to the buckets. System.out.println("Now let's add some test files:"); String fileName = "exampleFile.txt"; int fileCount = 2; try (BufferedWriter writer = new BufferedWriter(new java.io.FileWriter(fileName))) { writer.write("This is a sample file for uploading to a bucket."); } catch (IOException e) { e.printStackTrace(); } for (String bucketName : bucketNames){ for (int i = 0; i < fileCount; i++) { // Get the file name without extension. String fileNameWithoutExtension = java.nio.file.Paths.get(fileName).getFileName().toString(); int extensionIndex = fileNameWithoutExtension.lastIndexOf('.'); if (extensionIndex > 0) { fileNameWithoutExtension = fileNameWithoutExtension.substring(0, extensionIndex); } // Create the numbered file names. String numberedFileName = fileNameWithoutExtension + i + getFileExtension(fileName); fileNames.add(numberedFileName); s3LockActions.uploadFile(bucketName, numberedFileName, fileName); } } String question = null; System.out.print("Press Enter to continue..."); scanner.nextLine(); System.out.println("Now we can set some object lock policies on individual files:"); for (String bucketName : bucketNames) { for (int i = 0; i < fileNames.size(); i++){ // No modifications to the objects in the first bucket. if (!bucketName.equals(bucketNames.get(0))) { String exampleFileName = fileNames.get(i); switch (i) { case 0 -> { question = "Would you like to add a legal hold to " + exampleFileName + " in " + bucketName + " (y/n)?"; System.out.println(question); String ans = scanner.nextLine().trim(); if (ans.equalsIgnoreCase("y")) { System.out.println("**** You have selected to put a legal hold on " + exampleFileName); // Set a legal hold. s3LockActions.modifyObjectLegalHold(bucketName, exampleFileName, true); } } case 1 -> { question = """ Would you like to add a 1 day Governance retention period to %s in %s (y/n)? Reminder: Only a user with the s3:BypassGovernanceRetention permission will be able to delete this file or its bucket until the retention period has expired. """.formatted(exampleFileName, bucketName); System.out.println(question); String ans2 = scanner.nextLine().trim(); if (ans2.equalsIgnoreCase("y")) { s3LockActions.modifyObjectRetentionPeriod(bucketName, exampleFileName); } } } } } } } // Get file extension.
private static String getFileExtension(String fileName) { int dotIndex = fileName.lastIndexOf('.'); if (dotIndex > 0) { return fileName.substring(dotIndex); } return ""; } public static void configurationSetup() { String noLockBucketName = bucketName + "-no-lock"; String lockEnabledBucketName = bucketName + "-lock-enabled"; String retentionAfterCreationBucketName = bucketName + "-retention-after-creation"; bucketNames.add(noLockBucketName); bucketNames.add(lockEnabledBucketName); bucketNames.add(retentionAfterCreationBucketName); } public static int getChoiceResponse(String question, String[] choices) { Scanner scanner = new Scanner(System.in); if (question != null) { System.out.println(question); for (int i = 0; i < choices.length; i++) { System.out.println("\t" + (i + 1) + ". " + choices[i]); } } int choiceNumber = 0; while (choiceNumber < 1 || choiceNumber > choices.length) { String choice = scanner.nextLine(); try { choiceNumber = Integer.parseInt(choice); } catch (NumberFormatException e) { System.out.println("Invalid choice. Please enter a valid number."); } } return choiceNumber - 1; } }

A wrapper class for S3 functions.

import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.BucketVersioningStatus; import software.amazon.awssdk.services.s3.model.ChecksumAlgorithm; import software.amazon.awssdk.services.s3.model.CreateBucketRequest; import software.amazon.awssdk.services.s3.model.DefaultRetention; import software.amazon.awssdk.services.s3.model.DeleteBucketRequest; import software.amazon.awssdk.services.s3.model.DeleteObjectRequest; import software.amazon.awssdk.services.s3.model.GetObjectLegalHoldRequest; import software.amazon.awssdk.services.s3.model.GetObjectLegalHoldResponse; import software.amazon.awssdk.services.s3.model.GetObjectLockConfigurationRequest; import software.amazon.awssdk.services.s3.model.GetObjectLockConfigurationResponse; import software.amazon.awssdk.services.s3.model.GetObjectRetentionRequest; import software.amazon.awssdk.services.s3.model.GetObjectRetentionResponse; import software.amazon.awssdk.services.s3.model.HeadBucketRequest; import software.amazon.awssdk.services.s3.model.ListObjectVersionsRequest; import software.amazon.awssdk.services.s3.model.ListObjectVersionsResponse; import software.amazon.awssdk.services.s3.model.MFADelete; import software.amazon.awssdk.services.s3.model.ObjectLockConfiguration; import software.amazon.awssdk.services.s3.model.ObjectLockEnabled; import software.amazon.awssdk.services.s3.model.ObjectLockLegalHold; import software.amazon.awssdk.services.s3.model.ObjectLockLegalHoldStatus; import software.amazon.awssdk.services.s3.model.ObjectLockRetention; import software.amazon.awssdk.services.s3.model.ObjectLockRetentionMode; import software.amazon.awssdk.services.s3.model.ObjectLockRule; import software.amazon.awssdk.services.s3.model.PutBucketVersioningRequest; import software.amazon.awssdk.services.s3.model.PutObjectLegalHoldRequest; import software.amazon.awssdk.services.s3.model.PutObjectLockConfigurationRequest; import software.amazon.awssdk.services.s3.model.PutObjectRequest; import software.amazon.awssdk.services.s3.model.PutObjectResponse; import software.amazon.awssdk.services.s3.model.PutObjectRetentionRequest; import software.amazon.awssdk.services.s3.model.S3Exception; import software.amazon.awssdk.services.s3.model.VersioningConfiguration; import software.amazon.awssdk.services.s3.waiters.S3Waiter; import java.nio.file.Path; import java.nio.file.Paths; import java.time.Instant; import java.time.ZoneId; import java.time.ZonedDateTime; import java.time.format.DateTimeFormatter; import java.time.temporal.ChronoUnit; import java.util.List; import java.util.concurrent.atomic.AtomicInteger; import java.util.stream.Collectors; // Contains application logic for the Amazon S3 operations used in this workflow. public class S3LockActions { private static S3Client getClient() { return S3Client.builder() .region(Region.US_EAST_1) .build(); } // Set or modify a retention period on an object in an S3 bucket. public void modifyObjectRetentionPeriod(String bucketName, String objectKey) { // Calculate the instant one day from now. Instant futureInstant = Instant.now().plus(1, ChronoUnit.DAYS); // Convert the Instant to a ZonedDateTime object with a specific time zone. ZonedDateTime zonedDateTime = futureInstant.atZone(ZoneId.systemDefault()); // Define a formatter for human-readable output. DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"); // Format the ZonedDateTime object to a human-readable date string. 
String humanReadableDate = formatter.format(zonedDateTime); // Print the formatted date string. System.out.println("Formatted Date: " + humanReadableDate); ObjectLockRetention retention = ObjectLockRetention.builder() .mode(ObjectLockRetentionMode.GOVERNANCE) .retainUntilDate(futureInstant) .build(); PutObjectRetentionRequest retentionRequest = PutObjectRetentionRequest.builder() .bucket(bucketName) .key(objectKey) .retention(retention) .build(); getClient().putObjectRetention(retentionRequest); System.out.println("Set retention for "+objectKey +" in " +bucketName +" until "+ humanReadableDate +"."); } // Get the legal hold details for an S3 object. public ObjectLockLegalHold getObjectLegalHold(String bucketName, String objectKey) { try { GetObjectLegalHoldRequest legalHoldRequest = GetObjectLegalHoldRequest.builder() .bucket(bucketName) .key(objectKey) .build(); GetObjectLegalHoldResponse response = getClient().getObjectLegalHold(legalHoldRequest); System.out.println("Object legal hold for " + objectKey + " in " + bucketName + ":\n\tStatus: " + response.legalHold().status()); return response.legalHold(); } catch (S3Exception ex) { System.out.println("\tUnable to fetch legal hold: '" + ex.getMessage() + "'"); } return null; } // Create a new Amazon S3 bucket with object lock options. public void createBucketWithLockOptions(boolean enableObjectLock, String bucketName) { S3Waiter s3Waiter = getClient().waiter(); CreateBucketRequest bucketRequest = CreateBucketRequest.builder() .bucket(bucketName) .objectLockEnabledForBucket(enableObjectLock) .build(); getClient().createBucket(bucketRequest); HeadBucketRequest bucketRequestWait = HeadBucketRequest.builder() .bucket(bucketName) .build(); // Wait until the bucket is created and print out the response. s3Waiter.waitUntilBucketExists(bucketRequestWait); System.out.println(bucketName + " is ready"); } public List<S3InfoObject> listBucketsAndObjects(List<String> bucketNames, Boolean interactive) { AtomicInteger counter = new AtomicInteger(0); // Initialize counter. return bucketNames.stream() .flatMap(bucketName -> listBucketObjectsAndVersions(bucketName).versions().stream() .map(version -> { S3InfoObject s3InfoObject = new S3InfoObject(); s3InfoObject.setBucketName(bucketName); s3InfoObject.setVersion(version.versionId()); s3InfoObject.setKeyName(version.key()); return s3InfoObject; })) .peek(s3InfoObject -> { int i = counter.incrementAndGet(); // Increment and get the updated value. if (interactive) { System.out.println(i + ": "+ s3InfoObject.getKeyName()); System.out.printf("%5s Bucket name: %s\n", "", s3InfoObject.getBucketName()); System.out.printf("%5s Version: %s\n", "", s3InfoObject.getVersion()); } }) .collect(Collectors.toList()); } public ListObjectVersionsResponse listBucketObjectsAndVersions(String bucketName) { ListObjectVersionsRequest versionsRequest = ListObjectVersionsRequest.builder() .bucket(bucketName) .build(); return getClient().listObjectVersions(versionsRequest); } // Set or modify a retention period on an S3 bucket. 
public void modifyBucketDefaultRetention(String bucketName) { VersioningConfiguration versioningConfiguration = VersioningConfiguration.builder() .mfaDelete(MFADelete.DISABLED) .status(BucketVersioningStatus.ENABLED) .build(); PutBucketVersioningRequest versioningRequest = PutBucketVersioningRequest.builder() .bucket(bucketName) .versioningConfiguration(versioningConfiguration) .build(); getClient().putBucketVersioning(versioningRequest); DefaultRetention rention = DefaultRetention.builder() .days(1) .mode(ObjectLockRetentionMode.GOVERNANCE) .build(); ObjectLockRule lockRule = ObjectLockRule.builder() .defaultRetention(rention) .build(); ObjectLockConfiguration objectLockConfiguration = ObjectLockConfiguration.builder() .objectLockEnabled(ObjectLockEnabled.ENABLED) .rule(lockRule) .build(); PutObjectLockConfigurationRequest putObjectLockConfigurationRequest = PutObjectLockConfigurationRequest.builder() .bucket(bucketName) .objectLockConfiguration(objectLockConfiguration) .build(); getClient().putObjectLockConfiguration(putObjectLockConfigurationRequest) ; System.out.println("Added a default retention to bucket "+bucketName +"."); } // Enable object lock on an existing bucket. public void enableObjectLockOnBucket(String bucketName) { try { VersioningConfiguration versioningConfiguration = VersioningConfiguration.builder() .status(BucketVersioningStatus.ENABLED) .build(); PutBucketVersioningRequest putBucketVersioningRequest = PutBucketVersioningRequest.builder() .bucket(bucketName) .versioningConfiguration(versioningConfiguration) .build(); // Enable versioning on the bucket. getClient().putBucketVersioning(putBucketVersioningRequest); PutObjectLockConfigurationRequest request = PutObjectLockConfigurationRequest.builder() .bucket(bucketName) .objectLockConfiguration(ObjectLockConfiguration.builder() .objectLockEnabled(ObjectLockEnabled.ENABLED) .build()) .build(); getClient().putObjectLockConfiguration(request); System.out.println("Successfully enabled object lock on "+bucketName); } catch (S3Exception ex) { System.out.println("Error modifying object lock: '" + ex.getMessage() + "'"); } } public void uploadFile(String bucketName, String objectName, String filePath) { Path file = Paths.get(filePath); PutObjectRequest request = PutObjectRequest.builder() .bucket(bucketName) .key(objectName) .checksumAlgorithm(ChecksumAlgorithm.SHA256) .build(); PutObjectResponse response = getClient().putObject(request, file); if (response != null) { System.out.println("\tSuccessfully uploaded " + objectName + " to " + bucketName + "."); } else { System.out.println("\tCould not upload " + objectName + " to " + bucketName + "."); } } // Set or modify a legal hold on an object in an S3 bucket. public void modifyObjectLegalHold(String bucketName, String objectKey, boolean legalHoldOn) { ObjectLockLegalHold legalHold ; if (legalHoldOn) { legalHold = ObjectLockLegalHold.builder() .status(ObjectLockLegalHoldStatus.ON) .build(); } else { legalHold = ObjectLockLegalHold.builder() .status(ObjectLockLegalHoldStatus.OFF) .build(); } PutObjectLegalHoldRequest legalHoldRequest = PutObjectLegalHoldRequest.builder() .bucket(bucketName) .key(objectKey) .legalHold(legalHold) .build(); getClient().putObjectLegalHold(legalHoldRequest) ; System.out.println("Modified legal hold for "+ objectKey +" in "+bucketName +"."); } // Delete an object from a specific bucket. 
public void deleteObjectFromBucket(String bucketName, String objectKey, boolean hasRetention, String versionId) { try { DeleteObjectRequest objectRequest; if (hasRetention) { objectRequest = DeleteObjectRequest.builder() .bucket(bucketName) .key(objectKey) .versionId(versionId) .bypassGovernanceRetention(true) .build(); } else { objectRequest = DeleteObjectRequest.builder() .bucket(bucketName) .key(objectKey) .versionId(versionId) .build(); } getClient().deleteObject(objectRequest); System.out.println("The object was successfully deleted."); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); } } // Get the retention period for an S3 object. public ObjectLockRetention getObjectRetention(String bucketName, String key){ try { GetObjectRetentionRequest retentionRequest = GetObjectRetentionRequest.builder() .bucket(bucketName) .key(key) .build(); GetObjectRetentionResponse response = getClient().getObjectRetention(retentionRequest); System.out.println("\tObject retention for "+key +" in "+ bucketName +": " + response.retention().mode() +" until "+ response.retention().retainUntilDate() +"."); return response.retention(); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); return null; } } public void deleteBucketByName(String bucketName) { try { DeleteBucketRequest request = DeleteBucketRequest.builder() .bucket(bucketName) .build(); getClient().deleteBucket(request); System.out.println(bucketName +" was deleted."); } catch (S3Exception e) { System.err.println(e.awsErrorDetails().errorMessage()); } } // Get the object lock configuration details for an S3 bucket. public void getBucketObjectLockConfiguration(String bucketName) { GetObjectLockConfigurationRequest objectLockConfigurationRequest = GetObjectLockConfigurationRequest.builder() .bucket(bucketName) .build(); GetObjectLockConfigurationResponse response = getClient().getObjectLockConfiguration(objectLockConfigurationRequest); System.out.println("Bucket object lock config for "+bucketName +": "); System.out.println("\tEnabled: "+response.objectLockConfiguration().objectLockEnabled()); // The rule is null when object lock is enabled without a default retention period. ObjectLockRule rule = response.objectLockConfiguration().rule(); System.out.println("\tRule: " + (rule != null ? rule.defaultRetention() : "none")); } }

The following code example shows how to parse Amazon S3 URIs to extract important components like the bucket name and object key.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Parse an Amazon S3 URI by using the S3Uri class.

import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.S3Uri; import software.amazon.awssdk.services.s3.S3Utilities; import java.net.URI; import java.util.List; import java.util.Map; /** * * @param s3Client - An S3Client through which you acquire an S3Uri instance. * @param s3ObjectUrl - A complex URL (String) that is used to demonstrate S3Uri * capabilities. */ public static void parseS3UriExample(S3Client s3Client, String s3ObjectUrl) { logger.info(s3ObjectUrl); // Console output: // 'https://s3.us-west-1.amazonaws.com/myBucket/resources/doc.txt?versionId=abc123&partNumber=77&partNumber=88'. // Create an S3Utilities object using the configuration of the s3Client. S3Utilities s3Utilities = s3Client.utilities(); // From a String URL create a URI object to pass to the parseUri() method. URI uri = URI.create(s3ObjectUrl); S3Uri s3Uri = s3Utilities.parseUri(uri); // If the URI contains no value for the Region, bucket or key, the SDK returns // an empty Optional. // The SDK returns decoded URI values. Region region = s3Uri.region().orElse(null); log("region", region); // Console output: 'region: us-west-1'. String bucket = s3Uri.bucket().orElse(null); log("bucket", bucket); // Console output: 'bucket: myBucket'. String key = s3Uri.key().orElse(null); log("key", key); // Console output: 'key: resources/doc.txt'. Boolean isPathStyle = s3Uri.isPathStyle(); log("isPathStyle", isPathStyle); // Console output: 'isPathStyle: true'. // If the URI contains no query parameters, the SDK returns an empty map. Map<String, List<String>> queryParams = s3Uri.rawQueryParameters(); log("rawQueryParameters", queryParams); // Console output: 'rawQueryParameters: {versionId=[abc123], partNumber=[77, // 88]}'. // Retrieve the first or all values for a query parameter as shown in the // following code. String versionId = s3Uri.firstMatchingRawQueryParameter("versionId").orElse(null); log("firstMatchingRawQueryParameter-versionId", versionId); // Console output: 'firstMatchingRawQueryParameter-versionId: abc123'. String partNumber = s3Uri.firstMatchingRawQueryParameter("partNumber").orElse(null); log("firstMatchingRawQueryParameter-partNumber", partNumber); // Console output: 'firstMatchingRawQueryParameter-partNumber: 77'. List<String> partNumbers = s3Uri.firstMatchingRawQueryParameters("partNumber"); log("firstMatchingRawQueryParameter", partNumbers); // Console output: 'firstMatchingRawQueryParameter: [77, 88]'. /* * Object keys and query parameters with reserved or unsafe characters, must be * URL-encoded. * For example replace whitespace " " with "%20". * Valid: * "https://s3.us-west-1.amazonaws.com/myBucket/object%20key?query=%5Bbrackets%5D" * Invalid: * "https://s3.us-west-1.amazonaws.com/myBucket/object key?query=[brackets]" * * Virtual-hosted-style URIs with bucket names that contain a dot, ".", the dot * must not be URL-encoded. * Valid: "https://my.Bucket.s3.us-west-1.amazonaws.com/key" * Invalid: "https://my%2EBucket.s3.us-west-1.amazonaws.com/key" */ } private static void log(String s3UriElement, Object element) { if (element == null) { logger.info("{}: {}", s3UriElement, "null"); } else { logger.info("{}: {}", s3UriElement, element); } }

The following code example shows how to perform a multipart upload to an Amazon S3 object.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

The code examples use the following imports.

import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.core.exception.SdkException; import software.amazon.awssdk.core.sync.RequestBody; import software.amazon.awssdk.services.s3.S3AsyncClient; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.CompletedMultipartUpload; import software.amazon.awssdk.services.s3.model.CompletedPart; import software.amazon.awssdk.services.s3.model.CreateMultipartUploadResponse; import software.amazon.awssdk.services.s3.model.PutObjectResponse; import software.amazon.awssdk.services.s3.model.UploadPartRequest; import software.amazon.awssdk.services.s3.model.UploadPartResponse; import software.amazon.awssdk.services.s3.waiters.S3Waiter; import software.amazon.awssdk.transfer.s3.S3TransferManager; import software.amazon.awssdk.transfer.s3.model.FileUpload; import software.amazon.awssdk.transfer.s3.model.UploadFileRequest; import java.io.IOException; import java.io.RandomAccessFile; import java.net.URISyntaxException; import java.net.URL; import java.nio.ByteBuffer; import java.nio.file.Paths; import java.util.ArrayList; import java.util.List; import java.util.Objects; import java.util.UUID; import java.util.concurrent.CompletableFuture;

Use the S3 Transfer Manager on top of the AWS CRT-based S3 client to transparently perform a multipart upload when the size of the content exceeds a threshold. The default threshold size is 8 MB.

public void multipartUploadWithTransferManager(String filePath) { S3TransferManager transferManager = S3TransferManager.create(); UploadFileRequest uploadFileRequest = UploadFileRequest.builder() .putObjectRequest(b -> b .bucket(bucketName) .key(key)) .source(Paths.get(filePath)) .build(); FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest); fileUpload.completionFuture().join(); transferManager.close(); }
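S3TransferManager.create() builds a Transfer Manager with default settings. If you want to configure the underlying AWS CRT-based S3 client yourself, you can pass one to the Transfer Manager builder. The following is a minimal sketch; the throughput target and part size are illustrative values, not recommendations from this example.

S3AsyncClient crtClient = S3AsyncClient.crtBuilder()
        .targetThroughputInGbps(20.0)                 // Illustrative throughput target.
        .minimumPartSizeInBytes(8 * 1024 * 1024L)     // Illustrative 8 MiB minimum part size.
        .build();

S3TransferManager transferManager = S3TransferManager.builder()
        .s3Client(crtClient)
        .build();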

Use the S3Client API to perform a multipart upload.

public void multipartUploadWithS3Client(String filePath) { // Initiate the multipart upload. CreateMultipartUploadResponse createMultipartUploadResponse = s3Client.createMultipartUpload(b -> b .bucket(bucketName) .key(key)); String uploadId = createMultipartUploadResponse.uploadId(); // Upload the parts of the file. int partNumber = 1; List<CompletedPart> completedParts = new ArrayList<>(); ByteBuffer bb = ByteBuffer.allocate(1024 * 1024 * 5); // 5 MB byte buffer try (RandomAccessFile file = new RandomAccessFile(filePath, "r")) { long fileSize = file.length(); long position = 0; // Use a long so files larger than 2 GB are handled correctly. while (position < fileSize) { file.seek(position); int read = file.getChannel().read(bb); bb.flip(); // Swap position and limit before reading from the buffer. UploadPartRequest uploadPartRequest = UploadPartRequest.builder() .bucket(bucketName) .key(key) .uploadId(uploadId) .partNumber(partNumber) .build(); UploadPartResponse partResponse = s3Client.uploadPart( uploadPartRequest, RequestBody.fromByteBuffer(bb)); CompletedPart part = CompletedPart.builder() .partNumber(partNumber) .eTag(partResponse.eTag()) .build(); completedParts.add(part); bb.clear(); position += read; partNumber++; } } catch (IOException e) { logger.error(e.getMessage()); } // Complete the multipart upload. s3Client.completeMultipartUpload(b -> b .bucket(bucketName) .key(key) .uploadId(uploadId) .multipartUpload(CompletedMultipartUpload.builder().parts(completedParts).build())); }

Use the S3AsyncClient API with multipart support enabled to perform a multipart upload.

public void multipartUploadWithS3AsyncClient(String filePath) { // Enable multipart support. S3AsyncClient s3AsyncClient = S3AsyncClient.builder() .multipartEnabled(true) .build(); CompletableFuture<PutObjectResponse> response = s3AsyncClient.putObject(b -> b .bucket(bucketName) .key(key), Paths.get(filePath)); response.join(); logger.info("File uploaded in multiple 8 MiB parts using S3AsyncClient."); }
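In addition to multipartEnabled(true), the S3AsyncClient builder accepts a MultipartConfiguration (software.amazon.awssdk.services.s3.multipart.MultipartConfiguration) that controls when and how the SDK splits the upload. A minimal sketch with illustrative values follows.

S3AsyncClient s3AsyncClient = S3AsyncClient.builder()
        .multipartEnabled(true)
        .multipartConfiguration(MultipartConfiguration.builder()
                .thresholdInBytes(16 * 1024 * 1024L)        // Switch to multipart above 16 MiB (illustrative).
                .minimumPartSizeInBytes(10 * 1024 * 1024L)  // Use at least 10 MiB parts (illustrative).
                .build())
        .build();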

The following code example shows how to track an Amazon S3 object upload or download.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Track the progress of a file upload.

public void trackUploadFile(S3TransferManager transferManager, String bucketName, String key, URI filePathURI) { UploadFileRequest uploadFileRequest = UploadFileRequest.builder() .putObjectRequest(b -> b.bucket(bucketName).key(key)) .addTransferListener(LoggingTransferListener.create()) // Add listener. .source(Paths.get(filePathURI)) .build(); FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest); fileUpload.completionFuture().join(); /* The SDK provides a LoggingTransferListener implementation of the TransferListener interface. You can also implement the interface to provide your own logic. Configure log4J2 with settings such as the following. <Configuration status="WARN"> <Appenders> <Console name="AlignedConsoleAppender" target="SYSTEM_OUT"> <PatternLayout pattern="%m%n"/> </Console> </Appenders> <Loggers> <logger name="software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener" level="INFO" additivity="false"> <AppenderRef ref="AlignedConsoleAppender"/> </logger> </Loggers> </Configuration> Log4J2 logs the progress. The following is example output for a 21.3 MB file upload. Transfer initiated... | | 0.0% |==== | 21.1% |============ | 60.5% |====================| 100.0% Transfer complete! */ }
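The LoggingTransferListener used above is provided by the SDK. If you need custom behavior, you can implement the TransferListener interface (software.amazon.awssdk.transfer.s3.progress.TransferListener) and pass your implementation to addTransferListener instead. The following is a minimal sketch of such a listener that logs progress through SLF4J.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.transfer.s3.progress.TransferListener;

public class SimpleProgressListener implements TransferListener {
    private static final Logger logger = LoggerFactory.getLogger(SimpleProgressListener.class);

    @Override
    public void transferInitiated(Context.TransferInitiated context) {
        logger.info("Transfer initiated.");
    }

    @Override
    public void bytesTransferred(Context.BytesTransferred context) {
        // ratioTransferred() is empty when the total size isn't known yet.
        context.progressSnapshot().ratioTransferred()
                .ifPresent(ratio -> logger.info("Progress: {}%", Math.round(ratio * 100)));
    }

    @Override
    public void transferComplete(Context.TransferComplete context) {
        logger.info("Transfer complete.");
    }

    @Override
    public void transferFailed(Context.TransferFailed context) {
        logger.error("Transfer failed.", context.exception());
    }
}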

Track the progress of a file download.

public void trackDownloadFile(S3TransferManager transferManager, String bucketName, String key, String downloadedFileWithPath) { DownloadFileRequest downloadFileRequest = DownloadFileRequest.builder() .getObjectRequest(b -> b.bucket(bucketName).key(key)) .addTransferListener(LoggingTransferListener.create()) // Add listener. .destination(Paths.get(downloadedFileWithPath)) .build(); FileDownload downloadFile = transferManager.downloadFile(downloadFileRequest); CompletedFileDownload downloadResult = downloadFile.completionFuture().join(); /* The SDK provides a LoggingTransferListener implementation of the TransferListener interface. You can also implement the interface to provide your own logic. Configure log4J2 with settings such as the following. <Configuration status="WARN"> <Appenders> <Console name="AlignedConsoleAppender" target="SYSTEM_OUT"> <PatternLayout pattern="%m%n"/> </Console> </Appenders> <Loggers> <logger name="software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener" level="INFO" additivity="false"> <AppenderRef ref="AlignedConsoleAppender"/> </logger> </Loggers> </Configuration> Log4J2 logs the progress. The following is example output for a 21.3 MB file download. Transfer initiated... |======= | 39.4% |=============== | 78.8% |====================| 100.0% Transfer complete! */ }
  • For API details, see the following topics in AWS SDK for Java 2.x API Reference.

The following code example shows how to upload a local directory recursively to an Amazon Simple Storage Service (Amazon S3) bucket.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use an S3TransferManager to upload a local directory. You can view the complete file and test in the AWS Code Examples Repository on GitHub.

import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.services.s3.model.ObjectIdentifier; import software.amazon.awssdk.transfer.s3.S3TransferManager; import software.amazon.awssdk.transfer.s3.model.CompletedDirectoryUpload; import software.amazon.awssdk.transfer.s3.model.DirectoryUpload; import software.amazon.awssdk.transfer.s3.model.UploadDirectoryRequest; import java.net.URI; import java.net.URISyntaxException; import java.net.URL; import java.nio.file.Paths; import java.util.UUID; public Integer uploadDirectory(S3TransferManager transferManager, URI sourceDirectory, String bucketName) { DirectoryUpload directoryUpload = transferManager.uploadDirectory(UploadDirectoryRequest.builder() .source(Paths.get(sourceDirectory)) .bucket(bucketName) .build()); CompletedDirectoryUpload completedDirectoryUpload = directoryUpload.completionFuture().join(); completedDirectoryUpload.failedTransfers() .forEach(fail -> logger.warn("Object [{}] failed to transfer", fail.toString())); return completedDirectoryUpload.failedTransfers().size(); }
  • For API details, see UploadDirectory in AWS SDK for Java 2.x API Reference.

The following code example shows how to upload or download large files to and from Amazon S3.

For more information, see Uploading an object using multipart upload.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Call functions that transfer files to and from an S3 bucket by using the S3TransferManager. The following example downloads all of the objects in an S3 bucket to a local directory.

public Integer downloadObjectsToDirectory(S3TransferManager transferManager, URI destinationPathURI, String bucketName) { DirectoryDownload directoryDownload = transferManager.downloadDirectory(DownloadDirectoryRequest.builder() .destination(Paths.get(destinationPathURI)) .bucket(bucketName) .build()); CompletedDirectoryDownload completedDirectoryDownload = directoryDownload.completionFuture().join(); completedDirectoryDownload.failedTransfers() .forEach(fail -> logger.warn("Object [{}] failed to transfer", fail.toString())); return completedDirectoryDownload.failedTransfers().size(); }

Upload an entire local directory.

public Integer uploadDirectory(S3TransferManager transferManager, URI sourceDirectory, String bucketName) { DirectoryUpload directoryUpload = transferManager.uploadDirectory(UploadDirectoryRequest.builder() .source(Paths.get(sourceDirectory)) .bucket(bucketName) .build()); CompletedDirectoryUpload completedDirectoryUpload = directoryUpload.completionFuture().join(); completedDirectoryUpload.failedTransfers() .forEach(fail -> logger.warn("Object [{}] failed to transfer", fail.toString())); return completedDirectoryUpload.failedTransfers().size(); }

Upload a single file.

public String uploadFile(S3TransferManager transferManager, String bucketName, String key, URI filePathURI) { UploadFileRequest uploadFileRequest = UploadFileRequest.builder() .putObjectRequest(b -> b.bucket(bucketName).key(key)) .source(Paths.get(filePathURI)) .build(); FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest); CompletedFileUpload uploadResult = fileUpload.completionFuture().join(); return uploadResult.response().eTag(); }

The following code example shows how to upload a stream of unknown size to an Amazon S3 object.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Use the AWS CRT-based S3 client.

import com.example.s3.util.AsyncExampleUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.core.async.AsyncRequestBody; import software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody; import software.amazon.awssdk.core.exception.SdkException; import software.amazon.awssdk.services.s3.S3AsyncClient; import software.amazon.awssdk.services.s3.model.PutObjectResponse; import java.io.ByteArrayInputStream; import java.util.UUID; import java.util.concurrent.CompletableFuture; /** * @param s33CrtAsyncClient - To upload content from a stream of unknown size, use the AWS CRT-based S3 client. For more information, see * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/crt-based-s3-client.html. * @param bucketName - The name of the bucket. * @param key - The name of the object. * @return software.amazon.awssdk.services.s3.model.PutObjectResponse - Returns metadata pertaining to the put object operation. */ public PutObjectResponse putObjectFromStream(S3AsyncClient s33CrtAsyncClient, String bucketName, String key) { BlockingInputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingInputStream(null); // 'null' indicates a stream will be provided later. CompletableFuture<PutObjectResponse> responseFuture = s33CrtAsyncClient.putObject(r -> r.bucket(bucketName).key(key), body); // AsyncExampleUtils.randomString() returns a random string up to 100 characters. String randomString = AsyncExampleUtils.randomString(); logger.info("random string to upload: {}: length={}", randomString, randomString.length()); // Provide the stream of data to be uploaded. body.writeInputStream(new ByteArrayInputStream(randomString.getBytes())); PutObjectResponse response = responseFuture.join(); // Wait for the response. logger.info("Object {} uploaded to bucket {}.", key, bucketName); return response; } }
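The method above takes the AWS CRT-based asynchronous client as a parameter. The following is a minimal sketch of how a caller inside the example's class might construct that client and invoke the method; the bucket name is a placeholder, and the random key mirrors the style of the other examples.

S3AsyncClient s3CrtAsyncClient = S3AsyncClient.crtBuilder().build();
String bucketName = "amzn-s3-demo-bucket";       // Placeholder bucket name.
String key = UUID.randomUUID().toString();       // Random object key.
PutObjectResponse response = putObjectFromStream(s3CrtAsyncClient, bucketName, key);
logger.info("ETag of the uploaded object: {}", response.eTag());
s3CrtAsyncClient.close();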

Use the Amazon S3 Transfer Manager.

import com.example.s3.util.AsyncExampleUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.core.async.AsyncRequestBody; import software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody; import software.amazon.awssdk.core.exception.SdkException; import software.amazon.awssdk.transfer.s3.S3TransferManager; import software.amazon.awssdk.transfer.s3.model.CompletedUpload; import software.amazon.awssdk.transfer.s3.model.Upload; import java.io.ByteArrayInputStream; import java.util.UUID; /** * @param transferManager - To upload content from a stream of unknown size, use the S3TransferManager based on the AWS CRT-based S3 client. * For more information, see https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/transfer-manager.html. * @param bucketName - The name of the bucket. * @param key - The name of the object. * @return - software.amazon.awssdk.transfer.s3.model.CompletedUpload - The result of the completed upload. */ public CompletedUpload uploadStream(S3TransferManager transferManager, String bucketName, String key) { BlockingInputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingInputStream(null); // 'null' indicates a stream will be provided later. Upload upload = transferManager.upload(builder -> builder .requestBody(body) .putObjectRequest(req -> req.bucket(bucketName).key(key)) .build()); // AsyncExampleUtils.randomString() returns a random string up to 100 characters. String randomString = AsyncExampleUtils.randomString(); logger.info("random string to upload: {}: length={}", randomString, randomString.length()); // Provide the stream of data to be uploaded. body.writeInputStream(new ByteArrayInputStream(randomString.getBytes())); return upload.completionFuture().join(); } }

The following code example shows how to use checksums to work with an Amazon S3 object.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

The code examples use a subset of the following imports.

import org.slf4j.Logger; import org.slf4j.LoggerFactory; import software.amazon.awssdk.core.exception.SdkException; import software.amazon.awssdk.core.sync.RequestBody; import software.amazon.awssdk.services.s3.S3Client; import software.amazon.awssdk.services.s3.model.ChecksumAlgorithm; import software.amazon.awssdk.services.s3.model.ChecksumMode; import software.amazon.awssdk.services.s3.model.CompletedMultipartUpload; import software.amazon.awssdk.services.s3.model.CompletedPart; import software.amazon.awssdk.services.s3.model.CreateMultipartUploadResponse; import software.amazon.awssdk.services.s3.model.GetObjectResponse; import software.amazon.awssdk.services.s3.model.UploadPartRequest; import software.amazon.awssdk.services.s3.model.UploadPartResponse; import software.amazon.awssdk.services.s3.waiters.S3Waiter; import software.amazon.awssdk.transfer.s3.S3TransferManager; import software.amazon.awssdk.transfer.s3.model.FileUpload; import software.amazon.awssdk.transfer.s3.model.UploadFileRequest; import java.io.FileInputStream; import java.io.IOException; import java.io.RandomAccessFile; import java.net.URISyntaxException; import java.net.URL; import java.nio.ByteBuffer; import java.nio.file.Paths; import java.security.DigestInputStream; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.ArrayList; import java.util.Base64; import java.util.List; import java.util.Objects; import java.util.UUID;

Specify a checksum algorithm for the putObject method when you build the PutObjectRequest.

public void putObjectWithChecksum() { s3Client.putObject(b -> b .bucket(bucketName) .key(key) .checksumAlgorithm(ChecksumAlgorithm.CRC32), RequestBody.fromString("This is a test")); }

Verify the checksum for the getObject method when you build the GetObjectRequest.

public GetObjectResponse getObjectWithChecksum() { return s3Client.getObject(b -> b .bucket(bucketName) .key(key) .checksumMode(ChecksumMode.ENABLED)) .response(); }

Pre-calculate a checksum for the putObject method when you build the PutObjectRequest.

public void putObjectWithPrecalculatedChecksum(String filePath) { String checksum = calculateChecksum(filePath, "SHA-256"); s3Client.putObject((b -> b .bucket(bucketName) .key(key) .checksumSHA256(checksum)), RequestBody.fromFile(Paths.get(filePath))); }
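The calculateChecksum helper isn't shown in this excerpt. The following is a minimal sketch of one possible implementation; it streams the file through a MessageDigest and returns the Base64-encoded digest, which is the format that the checksumSHA256 field expects.

private static String calculateChecksum(String filePath, String algorithm) {
    try (DigestInputStream dis = new DigestInputStream(
            new FileInputStream(filePath), MessageDigest.getInstance(algorithm))) {
        byte[] buffer = new byte[8192];
        while (dis.read(buffer) != -1) {
            // Reading the stream drives the digest; the bytes themselves aren't needed here.
        }
        return Base64.getEncoder().encodeToString(dis.getMessageDigest().digest());
    } catch (IOException | NoSuchAlgorithmException e) {
        throw new RuntimeException(e);
    }
}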

Use the S3 Transfer Manager on top of the AWS CRT-based S3 client to transparently perform a multipart upload when the size of the content exceeds a threshold. The default threshold size is 8 MB.

You can specify a checksum algorithm for the SDK to use. By default, the SDK uses the CRC32 algorithm.

public void multipartUploadWithChecksumTm(String filePath) { S3TransferManager transferManager = S3TransferManager.create(); UploadFileRequest uploadFileRequest = UploadFileRequest.builder() .putObjectRequest(b -> b .bucket(bucketName) .key(key) .checksumAlgorithm(ChecksumAlgorithm.SHA1)) .source(Paths.get(filePath)) .build(); FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest); fileUpload.completionFuture().join(); transferManager.close(); }

Use the S3Client API (or the S3AsyncClient API) to perform a multipart upload. If you specify an additional checksum, you must specify the algorithm to use when you initiate the upload. You must also specify the algorithm for each part request and provide the checksum that is calculated for each part after it is uploaded.

public void multipartUploadWithChecksumS3Client(String filePath) { ChecksumAlgorithm algorithm = ChecksumAlgorithm.CRC32; // Initiate the multipart upload. CreateMultipartUploadResponse createMultipartUploadResponse = s3Client.createMultipartUpload(b -> b .bucket(bucketName) .key(key) .checksumAlgorithm(algorithm)); // Checksum specified on initiation. String uploadId = createMultipartUploadResponse.uploadId(); // Upload the parts of the file. int partNumber = 1; List<CompletedPart> completedParts = new ArrayList<>(); ByteBuffer bb = ByteBuffer.allocate(1024 * 1024 * 5); // 5 MB byte buffer try (RandomAccessFile file = new RandomAccessFile(filePath, "r")) { long fileSize = file.length(); long position = 0; while (position < fileSize) { file.seek(position); long read = file.getChannel().read(bb); bb.flip(); // Swap position and limit before reading from the buffer. UploadPartRequest uploadPartRequest = UploadPartRequest.builder() .bucket(bucketName) .key(key) .uploadId(uploadId) .checksumAlgorithm(algorithm) // Checksum specified on each part. .partNumber(partNumber) .build(); UploadPartResponse partResponse = s3Client.uploadPart( uploadPartRequest, RequestBody.fromByteBuffer(bb)); CompletedPart part = CompletedPart.builder() .partNumber(partNumber) .checksumCRC32(partResponse.checksumCRC32()) // Provide the calculated checksum. .eTag(partResponse.eTag()) .build(); completedParts.add(part); bb.clear(); position += read; partNumber++; } } catch (IOException e) { System.err.println(e.getMessage()); } // Complete the multipart upload. s3Client.completeMultipartUpload(b -> b .bucket(bucketName) .key(key) .uploadId(uploadId) .multipartUpload(CompletedMultipartUpload.builder().parts(completedParts).build())); }

Serverless examples

The following code example shows how to implement a Lambda function that receives an event triggered by uploading an object to an S3 bucket. The function retrieves the S3 bucket name and object key from the event parameter and calls the Amazon S3 API to retrieve and log the content type of the object.

SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Java.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 package example; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.S3Client; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class Handler implements RequestHandler<S3Event, String> { private static final Logger logger = LoggerFactory.getLogger(Handler.class); @Override public String handleRequest(S3Event s3event, Context context) { try { S3EventNotificationRecord record = s3event.getRecords().get(0); String srcBucket = record.getS3().getBucket().getName(); String srcKey = record.getS3().getObject().getUrlDecodedKey(); S3Client s3Client = S3Client.builder().build(); HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey); logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType()); return "Ok"; } catch (Exception e) { throw new RuntimeException(e); } } private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucket) .key(key) .build(); return s3Client.headObject(headObjectRequest); } }