
Learn the basics of S3 Directory Buckets with an AWS SDK


The following code examples show how to:

  • Set up a VPC and VPC Endpoint.

  • Set up the policies, roles, and users to work with S3 directory buckets and the S3 Express One Zone storage class.

  • Create two S3 Clients.

  • Create two buckets: one regular bucket and one directory bucket (see the naming sketch after this list).

  • Create an object and copy it over.

  • Demonstrate the performance difference between the two bucket types.

  • Populate the buckets to show the lexicographical difference.

  • Prompt the user to see if they want to clean up the resources.
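A quick note on names before the code: directory bucket names must embed the Availability Zone ID of the bucket and end with the fixed `--x-s3` suffix, which is why every scenario below builds names this way. The sketch that follows is illustrative only and is distilled from the Java scenario; the zone ID is an example value, and each scenario selects a supported zone at runtime.

    String zoneId = "usw2-az1"; // Example Availability Zone ID; the scenario selects a supported one at runtime.
    String directoryBucketName = "test-bucket-" + System.currentTimeMillis() + "--" + zoneId + "--x-s3";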

Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Run an interactive scenario demonstrating Amazon S3 Express One Zone features.

public class S3DirectoriesScenario {
    public static final String DASHES = new String(new char[80]).replace("\0", "-");
    private static final Logger logger = LoggerFactory.getLogger(S3DirectoriesScenario.class);
    static Scanner scanner = new Scanner(System.in);

    private static S3AsyncClient mS3RegularClient;
    private static S3AsyncClient mS3ExpressClient;

    private static String mdirectoryBucketName;
    private static String mregularBucketName;

    private static String stackName = "cfn-stack-s3-express-basics--" + UUID.randomUUID();
    private static String regularUser = "";
    private static String vpcId = "";
    private static String expressUser = "";
    private static String vpcEndpointId = "";

    private static final S3DirectoriesActions s3DirectoriesActions = new S3DirectoriesActions();

    public static void main(String[] args) {
        try {
            s3ExpressScenario();
        } catch (RuntimeException e) {
            logger.info(e.getMessage());
        }
    }

    // Runs the scenario.
    private static void s3ExpressScenario() {
        logger.info(DASHES);
        logger.info("Welcome to the Amazon S3 Express Basics demo using AWS SDK for Java V2.");
        logger.info("""
            Let's get started! First, please note that S3 Express One Zone works best when working within the AWS infrastructure,
            specifically when working in the same Availability Zone (AZ). To see the best results in this example
            and when you implement directory buckets into your infrastructure, it is best to put your compute resources
            in the same AZ as your directory bucket.
            """);
        waitForInputToContinue(scanner);
        logger.info(DASHES);

        // Create an optional VPC and create 2 IAM users.
        UserNames userNames = createVpcUsers();
        String expressUserName = userNames.getExpressUserName();
        String regularUserName = userNames.getRegularUserName();

        // Set up two S3 clients, one regular and one express,
        // and two buckets, one regular and one directory.
        setupClientsAndBuckets(expressUserName, regularUserName);

        // Create an S3 session for the express S3 client and add objects to the buckets.
        logger.info("Now let's add some objects to our buckets and demonstrate how to work with S3 Sessions.");
        waitForInputToContinue(scanner);
        String bucketObject = createSessionAddObjects();

        // Demonstrate performance differences between regular and directory buckets.
        demonstratePerformance(bucketObject);

        // Populate the buckets to show the lexicographical difference between
        // regular and express buckets.
        showLexicographicalDifferences(bucketObject);

        logger.info(DASHES);
        logger.info("That's it for our tour of the basic operations for S3 Express One Zone.");
        logger.info("Would you like to cleanUp the AWS resources? (y/n): ");
        String response = scanner.next().trim().toLowerCase();
        if (response.equals("y")) {
            cleanUp(stackName);
        }
    }

    /*
     * Delete resources created by this scenario.
     */
    public static void cleanUp(String stackName) {
        try {
            if (mdirectoryBucketName != null) {
                s3DirectoriesActions.deleteBucketAndObjectsAsync(mS3ExpressClient, mdirectoryBucketName).join();
            }
            logger.info("Deleted directory bucket " + mdirectoryBucketName);
            mdirectoryBucketName = null;
            if (mregularBucketName != null) {
                s3DirectoriesActions.deleteBucketAndObjectsAsync(mS3RegularClient, mregularBucketName).join();
            }
        } catch (CompletionException ce) {
            Throwable cause = ce.getCause();
            if (cause instanceof S3Exception) {
                logger.error("S3Exception occurred: {}", cause.getMessage(), ce);
            } else {
                logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
            }
        }
        logger.info("Deleted regular bucket " + mregularBucketName);
        mregularBucketName = null;
        CloudFormationHelper.destroyCloudFormationStack(stackName);
    }

    private static void showLexicographicalDifferences(String bucketObject) {
        logger.info(DASHES);
        logger.info("""
            7. Populate the buckets to show the lexicographical (alphabetical) difference when object names are listed.
            Now let's explore how directory buckets store objects in a different manner to regular buckets.
            The key is in the name "Directory".
            Where regular buckets store their key/value pairs in a flat manner,
            directory buckets use actual directories/folders.
            This allows for more rapid indexing, traversing, and therefore retrieval times!
            The more segmented your bucket is, with lots of directories, sub-directories, and objects,
            the more efficient it becomes.
            This structural difference also causes `ListObject` operations to behave differently,
            which can cause unexpected results.
            Let's add a few more objects in sub-directories to see how the output of ListObjects changes.
            """);

        waitForInputToContinue(scanner);

        // Populate a few more files in each bucket so that we can use
        // ListObjects and show the difference.
        String otherObject = "other/" + bucketObject;
        String altObject = "alt/" + bucketObject;
        String otherAltObject = "other/alt/" + bucketObject;

        try {
            s3DirectoriesActions.putObjectAsync(mS3RegularClient, mregularBucketName, otherObject, "").join();
            s3DirectoriesActions.putObjectAsync(mS3ExpressClient, mdirectoryBucketName, otherObject, "").join();
            s3DirectoriesActions.putObjectAsync(mS3RegularClient, mregularBucketName, altObject, "").join();
            s3DirectoriesActions.putObjectAsync(mS3ExpressClient, mdirectoryBucketName, altObject, "").join();
            s3DirectoriesActions.putObjectAsync(mS3RegularClient, mregularBucketName, otherAltObject, "").join();
            s3DirectoriesActions.putObjectAsync(mS3ExpressClient, mdirectoryBucketName, otherAltObject, "").join();
        } catch (CompletionException ce) {
            Throwable cause = ce.getCause();
            if (cause instanceof NoSuchBucketException) {
                logger.error("S3Exception occurred: {}", cause.getMessage(), ce);
            } else {
                logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
            }
            return;
        }

        try {
            // List objects in both S3 buckets.
            List<String> dirBucketObjects = s3DirectoriesActions.listObjectsAsync(mS3ExpressClient, mdirectoryBucketName).join();
            List<String> regBucketObjects = s3DirectoriesActions.listObjectsAsync(mS3RegularClient, mregularBucketName).join();

            logger.info("Directory bucket content");
            for (String obj : dirBucketObjects) {
                logger.info(obj);
            }
            logger.info("Regular bucket content");
            for (String obj : regBucketObjects) {
                logger.info(obj);
            }
        } catch (CompletionException e) {
            logger.error("Async operation failed: {} ", e.getCause().getMessage());
            return;
        }

        logger.info("""
            Notice how the regular bucket lists objects in lexicographical order, while the directory bucket does not.
            This is because the regular bucket considers the whole "key" to be the object identifier, while the
            directory bucket actually creates directories and uses the object "key" as a path to the object.
            """);
        waitForInputToContinue(scanner);
    }

    /**
     * Demonstrates the performance difference between downloading an object from a directory bucket and a regular bucket.
     *
     * <p>This method:
     * <ul>
     * <li>Prompts the user to choose the number of downloads (default is 1,000).</li>
     * <li>Downloads the specified object from the directory bucket and measures the total time.</li>
     * <li>Downloads the same object from the regular bucket and measures the total time.</li>
     * <li>Compares the time differences and prints the results.</li>
     * </ul>
     *
     * <p>Note: The performance difference will be more pronounced if this example is run on an EC2 instance
     * in the same Availability Zone as the buckets.
     *
     * @param bucketObject the name of the object to download
     */
    private static void demonstratePerformance(String bucketObject) {
        logger.info(DASHES);
        logger.info("6. Demonstrate the performance difference.");
        logger.info("""
            Now, let's do a performance test. We'll download the same object from each bucket repeatedly
            and compare the total time needed.
            Note: the performance difference will be much more pronounced if this example is run in an EC2 instance
            in the same Availability Zone as the bucket.
            """);
        waitForInputToContinue(scanner);

        int downloads = 1000; // Default value.
        logger.info("The default number of downloads of the same object for this example is set at " + downloads + ".");

        // Ask if the user wants to download a different number.
        logger.info("Would you like to download the file a different number of times? (y/n): ");
        String response = scanner.next().trim().toLowerCase();
        if (response.equals("y")) {
            int maxDownloads = 1_000_000;

            // Ask for a valid number of downloads.
            while (true) {
                logger.info("Enter a number between 1 and " + maxDownloads + " for the number of downloads: ");
                if (scanner.hasNextInt()) {
                    downloads = scanner.nextInt();
                    if (downloads >= 1 && downloads <= maxDownloads) {
                        break;
                    } else {
                        logger.info("Please enter a number between 1 and " + maxDownloads + ".");
                    }
                } else {
                    logger.info("Invalid input. Please enter a valid integer.");
                    scanner.next();
                }
            }
            logger.info("You have chosen to download {} items.", downloads);
        } else {
            logger.info("No changes made. Using default downloads: {}", downloads);
        }

        // Download the object from the directory bucket and time it.
        logger.info("Downloading from the directory bucket.");
        long directoryTimeStart = System.nanoTime();
        for (int index = 0; index < downloads; index++) {
            if (index % 50 == 0) {
                logger.info("Download " + index + " of " + downloads);
            }
            try {
                // Get the object from the directory bucket.
                s3DirectoriesActions.getObjectAsync(mS3ExpressClient, mdirectoryBucketName, bucketObject).join();
            } catch (CompletionException ce) {
                Throwable cause = ce.getCause();
                if (cause instanceof NoSuchKeyException) {
                    logger.error("S3Exception occurred: {}", cause.getMessage(), ce);
                } else {
                    logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
                }
                return;
            }
        }
        long directoryTimeDifference = System.nanoTime() - directoryTimeStart;

        // Download from the regular bucket.
        logger.info("Downloading from the regular bucket.");
        long normalTimeStart = System.nanoTime();
        for (int index = 0; index < downloads; index++) {
            if (index % 50 == 0) {
                logger.info("Download " + index + " of " + downloads);
            }
            try {
                s3DirectoriesActions.getObjectAsync(mS3RegularClient, mregularBucketName, bucketObject).join();
            } catch (CompletionException ce) {
                Throwable cause = ce.getCause();
                if (cause instanceof NoSuchKeyException) {
                    logger.error("S3Exception occurred: {}", cause.getMessage(), ce);
                } else {
                    logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
                }
                return;
            }
        }
        long normalTimeDifference = System.nanoTime() - normalTimeStart;

        logger.info("The directory bucket took " + directoryTimeDifference + " nanoseconds, while the regular bucket took " + normalTimeDifference + " nanoseconds.");
        long difference = normalTimeDifference - directoryTimeDifference;
        logger.info("That's a difference of " + difference + " nanoseconds, or");
        logger.info(difference / 1_000_000_000.0 + " seconds.");

        if (difference < 0) {
            logger.info("The directory buckets were slower. This can happen if you are not running on the cloud within a VPC.");
        }
        waitForInputToContinue(scanner);
    }

    private static String createSessionAddObjects() {
        logger.info(DASHES);
        logger.info("""
            5. Create an object and copy it.
            We'll create an object consisting of some text and upload it to the regular bucket.
            """);
        waitForInputToContinue(scanner);

        String bucketObject = "basic-text-object.txt";
        try {
            s3DirectoriesActions.putObjectAsync(mS3RegularClient, mregularBucketName, bucketObject, "Look Ma, I'm a bucket!").join();
            s3DirectoriesActions.createSessionAsync(mS3ExpressClient, mdirectoryBucketName).join();

            // Copy the object to the destination S3 bucket.
            s3DirectoriesActions.copyObjectAsync(mS3ExpressClient, mregularBucketName, bucketObject, mdirectoryBucketName, bucketObject).join();
        } catch (CompletionException ce) {
            Throwable cause = ce.getCause();
            if (cause instanceof S3Exception) {
                logger.error("S3Exception occurred: {}", cause.getMessage(), ce);
            } else {
                logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
            }
        }

        logger.info("""
            It worked! This is because the S3Client that performed the copy operation is the expressClient,
            using the credentials for the user with permission to work with directory buckets.
            It's important to remember the user permissions when interacting with directory buckets.
            Instead of validating permissions on every call as regular buckets do,
            directory buckets utilize the user credentials and session token to validate.
            This allows for much faster connection speeds on every call. For single calls, this is low,
            but for many concurrent calls this adds up to a lot of time saved.
            """);
        waitForInputToContinue(scanner);
        return bucketObject;
    }

    /**
     * Creates VPC users for the S3 Express One Zone scenario.
     * <p>
     * This method performs the following steps:
     * <ol>
     * <li>Optionally creates a new VPC and VPC Endpoint if the application is running in an EC2 instance
     * in the same Availability Zone as the directory buckets.</li>
     * <li>Creates two IAM users: one with S3 Express One Zone permissions and one without.</li>
     * </ol>
     *
     * @return a {@link UserNames} object containing the names of the created IAM users
     */
    public static UserNames createVpcUsers() {
        /*
        Optionally create a VPC.
        Create two IAM users, one with S3 Express One Zone permissions and one without.
        */
        logger.info(DASHES);
        logger.info("""
            1. First, we'll set up a new VPC and VPC Endpoint if this program is running in an EC2 instance
            in the same AZ as your\s directory buckets will be.
            Are you running this in an EC2 instance located in the same AZ as your intended directory buckets?
            """);
        logger.info("Do you want to setup a VPC Endpoint? (y/n)");
        String endpointAns = scanner.nextLine().trim();
        if (endpointAns.equalsIgnoreCase("y")) {
            logger.info("""
                Great! Let's set up a VPC, retrieve the Route Table from it,
                and create a VPC Endpoint to connect the S3 Client to.
                """);
            try {
                s3DirectoriesActions.setupVPCAsync().join();
            } catch (CompletionException ce) {
                Throwable cause = ce.getCause();
                if (cause instanceof Ec2Exception) {
                    logger.error("Ec2Exception occurred: {}", cause.getMessage(), ce);
                } else {
                    logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
                }
            }
            waitForInputToContinue(scanner);
        } else {
            logger.info("Skipping the VPC setup. Don't forget to use this in production!");
        }

        logger.info(DASHES);
        logger.info("""
            2. Create a RegularUser and ExpressUser by using the AWS CDK.
            One IAM User, named RegularUser, will have permissions to work only with regular buckets,
            and one IAM user, named ExpressUser, will have permissions to work only with directory buckets.
            """);
        waitForInputToContinue(scanner);

        // Create two users required for this scenario.
        Map<String, String> stackOutputs = createUsersUsingCDK(stackName);
        regularUser = stackOutputs.get("RegularUser");
        expressUser = stackOutputs.get("ExpressUser");
        UserNames names = new UserNames();
        names.setRegularUserName(regularUser);
        names.setExpressUserName(expressUser);
        return names;
    }

    /**
     * Creates users using AWS CloudFormation.
     *
     * @return a {@link Map} of String keys and String values representing the stack outputs,
     *         which may include user-related information such as user names and IDs.
     */
    public static Map<String, String> createUsersUsingCDK(String stackName) {
        logger.info("We'll use an AWS CloudFormation template to create the IAM users and policies.");
        CloudFormationHelper.deployCloudFormationStack(stackName);
        return CloudFormationHelper.getStackOutputsAsync(stackName).join();
    }

    /**
     * Sets up the necessary clients and buckets for the S3 Express service.
     *
     * @param expressUserName the username for the user with S3 Express permissions
     * @param regularUserName the username for the user with regular S3 permissions
     */
    public static void setupClientsAndBuckets(String expressUserName, String regularUserName) {
        Scanner locscanner = new Scanner(System.in);
        String accessKeyIdforRegUser;
        String secretAccessforRegUser;
        try {
            CreateAccessKeyResponse keyResponse = s3DirectoriesActions.createAccessKeyAsync(regularUserName).join();
            accessKeyIdforRegUser = keyResponse.accessKey().accessKeyId();
            secretAccessforRegUser = keyResponse.accessKey().secretAccessKey();
        } catch (CompletionException ce) {
            Throwable cause = ce.getCause();
            if (cause instanceof IamException) {
                logger.error("IamException occurred: {}", cause.getMessage(), ce);
            } else {
                logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
            }
            return;
        }

        String accessKeyIdforExpressUser;
        String secretAccessforExpressUser;
        try {
            CreateAccessKeyResponse keyResponseExpress = s3DirectoriesActions.createAccessKeyAsync(expressUserName).join();
            accessKeyIdforExpressUser = keyResponseExpress.accessKey().accessKeyId();
            secretAccessforExpressUser = keyResponseExpress.accessKey().secretAccessKey();
        } catch (CompletionException ce) {
            Throwable cause = ce.getCause();
            if (cause instanceof IamException) {
                logger.error("IamException occurred: {}", cause.getMessage(), ce);
            } else {
                logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
            }
            return;
        }

        logger.info(DASHES);
        logger.info("""
            3. Create two S3Clients; one uses the ExpressUser's credentials,
            and one uses the RegularUser's credentials.
            """);
        waitForInputToContinue(locscanner);
        try {
            mS3RegularClient = createS3ClientWithAccessKeyAsync(accessKeyIdforRegUser, secretAccessforRegUser).join();
            mS3ExpressClient = createS3ClientWithAccessKeyAsync(accessKeyIdforExpressUser, secretAccessforExpressUser).join();
        } catch (CompletionException ce) {
            Throwable cause = ce.getCause();
            if (cause instanceof IllegalArgumentException) {
                logger.error("An invalid argument exception occurred: {}", cause.getMessage(), ce);
            } else {
                logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
            }
            return;
        }

        logger.info("""
            We can now use the ExpressUser client to make calls to S3 Express operations.
            """);
        waitForInputToContinue(locscanner);
        logger.info(DASHES);
        logger.info("""
            4. Create two buckets.
            Now we will create a directory bucket, which is the linchpin of the S3 Express One Zone service.
            Directory buckets behave differently from regular S3 buckets, which we will explore here.
            We'll also create a regular bucket, put an object into the regular bucket, and copy it to the directory bucket.
            """);
        logger.info("""
            Now, let's choose an availability zone (AZ) for the directory bucket. We'll choose one that is supported.
            """);
        String zoneId;
        String regularBucketName;
        try {
            zoneId = s3DirectoriesActions.selectAvailabilityZoneIdAsync().join();
            regularBucketName = "reg-bucket-" + System.currentTimeMillis();
        } catch (CompletionException ce) {
            Throwable cause = ce.getCause();
            if (cause instanceof Ec2Exception) {
                logger.error("Ec2Exception occurred: {}", cause.getMessage(), ce);
            } else {
                logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
            }
            return;
        }

        logger.info("""
            Now, let's create the actual directory bucket, as well as a regular bucket.
            """);
        String directoryBucketName = "test-bucket-" + System.currentTimeMillis() + "--" + zoneId + "--x-s3";
        try {
            s3DirectoriesActions.createDirectoryBucketAsync(mS3ExpressClient, directoryBucketName, zoneId).join();
            logger.info("Created directory bucket {}", directoryBucketName);
        } catch (CompletionException ce) {
            Throwable cause = ce.getCause();
            if (cause instanceof BucketAlreadyExistsException) {
                logger.error("The bucket already exists. Moving on: {}", cause.getMessage(), ce);
            } else {
                logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
                return;
            }
        }

        // Assign to the data member.
        mdirectoryBucketName = directoryBucketName;

        try {
            s3DirectoriesActions.createBucketAsync(mS3RegularClient, regularBucketName).join();
            logger.info("Created regular bucket {} ", regularBucketName);
            mregularBucketName = regularBucketName;
        } catch (CompletionException ce) {
            Throwable cause = ce.getCause();
            if (cause instanceof BucketAlreadyExistsException) {
                logger.error("The bucket already exists. Moving on: {}", cause.getMessage(), ce);
            } else {
                logger.error("An unexpected error occurred: {}", cause.getMessage(), ce);
                return;
            }
        }
        logger.info("Great! Both buckets were created.");
        waitForInputToContinue(locscanner);
    }

    /**
     * Creates an asynchronous S3 client with the specified access key and secret access key.
     *
     * @param accessKeyId the AWS access key ID
     * @param secretAccessKey the AWS secret access key
     * @return a {@link CompletableFuture} that asynchronously creates the S3 client
     * @throws IllegalArgumentException if the access key ID or secret access key is null
     */
    public static CompletableFuture<S3AsyncClient> createS3ClientWithAccessKeyAsync(String accessKeyId, String secretAccessKey) {
        return CompletableFuture.supplyAsync(() -> {
            // Validate input parameters.
            if (accessKeyId == null || accessKeyId.isBlank() || secretAccessKey == null || secretAccessKey.isBlank()) {
                throw new IllegalArgumentException("Access Key ID and Secret Access Key must not be null or empty");
            }

            AwsBasicCredentials awsCredentials = AwsBasicCredentials.create(accessKeyId, secretAccessKey);
            return S3AsyncClient.builder()
                    .credentialsProvider(StaticCredentialsProvider.create(awsCredentials))
                    .region(Region.US_WEST_2)
                    .build();
        });
    }

    private static void waitForInputToContinue(Scanner scanner) {
        while (true) {
            logger.info("");
            logger.info("Enter 'c' followed by <ENTER> to continue:");
            String input = scanner.nextLine();
            if (input.trim().equalsIgnoreCase("c")) {
                logger.info("Continuing with the program...");
                logger.info("");
                break;
            } else {
                logger.info("Invalid input. Please try again.");
            }
        }
    }
}

A wrapper class for Amazon S3 SDK methods.

public class S3DirectoriesActions {
    private static IamAsyncClient iamAsyncClient;
    private static Ec2AsyncClient ec2AsyncClient;
    private static final Logger logger = LoggerFactory.getLogger(S3DirectoriesActions.class);

    private static IamAsyncClient getIAMAsyncClient() {
        if (iamAsyncClient == null) {
            SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder()
                    .maxConcurrency(100)
                    .connectionTimeout(Duration.ofSeconds(60))
                    .readTimeout(Duration.ofSeconds(60))
                    .writeTimeout(Duration.ofSeconds(60))
                    .build();

            ClientOverrideConfiguration overrideConfig = ClientOverrideConfiguration.builder()
                    .apiCallTimeout(Duration.ofMinutes(2))
                    .apiCallAttemptTimeout(Duration.ofSeconds(90))
                    .retryStrategy(RetryMode.STANDARD)
                    .build();

            iamAsyncClient = IamAsyncClient.builder()
                    .httpClient(httpClient)
                    .overrideConfiguration(overrideConfig)
                    .build();
        }
        return iamAsyncClient;
    }

    private static Ec2AsyncClient getEc2AsyncClient() {
        if (ec2AsyncClient == null) {
            SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder()
                    .maxConcurrency(100)
                    .connectionTimeout(Duration.ofSeconds(60))
                    .readTimeout(Duration.ofSeconds(60))
                    .writeTimeout(Duration.ofSeconds(60))
                    .build();

            ClientOverrideConfiguration overrideConfig = ClientOverrideConfiguration.builder()
                    .apiCallTimeout(Duration.ofMinutes(2))
                    .apiCallAttemptTimeout(Duration.ofSeconds(90))
                    .retryStrategy(RetryMode.STANDARD)
                    .build();

            ec2AsyncClient = Ec2AsyncClient.builder()
                    .httpClient(httpClient)
                    .region(Region.US_WEST_2)
                    .overrideConfiguration(overrideConfig)
                    .build();
        }
        return ec2AsyncClient;
    }

    /**
     * Deletes the specified S3 bucket and all the objects within it asynchronously.
     *
     * @param s3AsyncClient the S3 asynchronous client to use for the operations
     * @param bucketName the name of the S3 bucket to be deleted
     * @return a {@link CompletableFuture} that completes with a {@link WaiterResponse} containing the
     *         {@link HeadBucketResponse} when the bucket has been successfully deleted
     * @throws CompletionException if there was an error deleting the bucket or its objects
     */
    public CompletableFuture<WaiterResponse<HeadBucketResponse>> deleteBucketAndObjectsAsync(S3AsyncClient s3AsyncClient, String bucketName) {
        ListObjectsV2Request listRequest = ListObjectsV2Request.builder()
                .bucket(bucketName)
                .build();

        return s3AsyncClient.listObjectsV2(listRequest)
                .thenCompose(listResponse -> {
                    if (!listResponse.contents().isEmpty()) {
                        List<ObjectIdentifier> objectIdentifiers = listResponse.contents().stream()
                                .map(s3Object -> ObjectIdentifier.builder().key(s3Object.key()).build())
                                .collect(Collectors.toList());

                        DeleteObjectsRequest deleteRequest = DeleteObjectsRequest.builder()
                                .bucket(bucketName)
                                .delete(Delete.builder().objects(objectIdentifiers).build())
                                .build();

                        return s3AsyncClient.deleteObjects(deleteRequest)
                                .thenAccept(deleteResponse -> {
                                    if (!deleteResponse.errors().isEmpty()) {
                                        deleteResponse.errors().forEach(error ->
                                                logger.error("Couldn't delete object " + error.key() + ". Reason: " + error.message()));
                                    }
                                });
                    }
                    return CompletableFuture.completedFuture(null);
                })
                .thenCompose(ignored -> {
                    DeleteBucketRequest deleteBucketRequest = DeleteBucketRequest.builder()
                            .bucket(bucketName)
                            .build();
                    return s3AsyncClient.deleteBucket(deleteBucketRequest);
                })
                .thenCompose(ignored -> {
                    S3AsyncWaiter waiter = s3AsyncClient.waiter();
                    HeadBucketRequest headBucketRequest = HeadBucketRequest.builder().bucket(bucketName).build();
                    return waiter.waitUntilBucketNotExists(headBucketRequest);
                })
                .whenComplete((ignored, exception) -> {
                    if (exception != null) {
                        Throwable cause = exception.getCause();
                        if (cause instanceof S3Exception) {
                            throw new CompletionException("Error deleting bucket: " + bucketName, cause);
                        }
                        throw new CompletionException("Failed to delete bucket and objects: " + bucketName, exception);
                    }
                    logger.info("Bucket deleted successfully: " + bucketName);
                });
    }

    /**
     * Lists the objects in an S3 bucket asynchronously.
     *
     * @param s3Client the S3 async client to use for the operation
     * @param bucketName the name of the S3 bucket containing the objects to list
     * @return a {@link CompletableFuture} that contains the list of object keys in the specified bucket
     */
    public CompletableFuture<List<String>> listObjectsAsync(S3AsyncClient s3Client, String bucketName) {
        ListObjectsV2Request request = ListObjectsV2Request.builder()
                .bucket(bucketName)
                .build();

        return s3Client.listObjectsV2(request)
                .thenApply(response -> response.contents().stream()
                        .map(S3Object::key)
                        .toList())
                .whenComplete((result, exception) -> {
                    if (exception != null) {
                        throw new CompletionException("Couldn't list objects in bucket: " + bucketName, exception);
                    }
                });
    }

    /**
     * Retrieves an object from an Amazon S3 bucket asynchronously.
     *
     * @param s3Client the S3 async client to use for the operation
     * @param bucketName the name of the S3 bucket containing the object
     * @param keyName the unique identifier (key) of the object to retrieve
     * @return a {@link CompletableFuture} that, when completed, contains the object's content as a {@link ResponseBytes} of {@link GetObjectResponse}
     */
    public CompletableFuture<ResponseBytes<GetObjectResponse>> getObjectAsync(S3AsyncClient s3Client, String bucketName, String keyName) {
        GetObjectRequest objectRequest = GetObjectRequest.builder()
                .key(keyName)
                .bucket(bucketName)
                .build();

        // Get the object asynchronously and transform it into a byte array.
        return s3Client.getObject(objectRequest, AsyncResponseTransformer.toBytes())
                .exceptionally(exception -> {
                    Throwable cause = exception.getCause();
                    if (cause instanceof NoSuchKeyException) {
                        throw new CompletionException("Failed to get the object. Reason: " + ((S3Exception) cause).awsErrorDetails().errorMessage(), cause);
                    }
                    throw new CompletionException("Failed to get the object", exception);
                });
    }

    /**
     * Asynchronously copies an object from one S3 bucket to another.
     *
     * @param s3Client the S3 async client to use for the copy operation
     * @param sourceBucket the name of the source bucket
     * @param sourceKey the key of the object to be copied in the source bucket
     * @param destinationBucket the name of the destination bucket
     * @param destinationKey the key of the copied object in the destination bucket
     * @return a {@link CompletableFuture} that completes when the copy operation is finished
     */
    public CompletableFuture<Void> copyObjectAsync(S3AsyncClient s3Client, String sourceBucket, String sourceKey, String destinationBucket, String destinationKey) {
        CopyObjectRequest copyRequest = CopyObjectRequest.builder()
                .sourceBucket(sourceBucket)
                .sourceKey(sourceKey)
                .destinationBucket(destinationBucket)
                .destinationKey(destinationKey)
                .build();

        return s3Client.copyObject(copyRequest)
                .thenRun(() -> logger.info("Copied object '" + sourceKey + "' from bucket '" + sourceBucket + "' to bucket '" + destinationBucket + "'"))
                .whenComplete((ignored, exception) -> {
                    if (exception != null) {
                        Throwable cause = exception.getCause();
                        if (cause instanceof S3Exception) {
                            throw new CompletionException("Couldn't copy object '" + sourceKey + "' from bucket '" + sourceBucket +
                                    "' to bucket '" + destinationBucket + "'. Reason: " + ((S3Exception) cause).awsErrorDetails().errorMessage(), cause);
                        }
                        throw new CompletionException("Failed to copy object", exception);
                    }
                });
    }

    /**
     * Asynchronously creates a session for the specified S3 bucket.
     *
     * @param s3Client the S3 asynchronous client to use for creating the session
     * @param bucketName the name of the S3 bucket for which to create the session
     * @return a {@link CompletableFuture} that completes when the session is created, or throws a {@link CompletionException} if an error occurs
     */
    public CompletableFuture<CreateSessionResponse> createSessionAsync(S3AsyncClient s3Client, String bucketName) {
        CreateSessionRequest request = CreateSessionRequest.builder()
                .bucket(bucketName)
                .build();

        return s3Client.createSession(request)
                .whenComplete((response, exception) -> {
                    if (exception != null) {
                        Throwable cause = exception.getCause();
                        if (cause instanceof S3Exception) {
                            throw new CompletionException("Couldn't create the session. Reason: " + ((S3Exception) cause).awsErrorDetails().errorMessage(), cause);
                        }
                        throw new CompletionException("Unexpected error occurred while creating session", exception);
                    }
                    logger.info("Created session for bucket: " + bucketName);
                });
    }

    /**
     * Creates a new S3 directory bucket in a specified zone (in this code example, a specified Availability Zone).
     *
     * @param s3Client The asynchronous S3 client used to create the bucket
     * @param bucketName The name of the bucket to be created
     * @param zone The Availability Zone where the bucket will be created
     * @return a {@link CompletableFuture} that completes with the {@link CreateBucketResponse}
     * @throws CompletionException if there's an error creating the bucket
     */
    public CompletableFuture<CreateBucketResponse> createDirectoryBucketAsync(S3AsyncClient s3Client, String bucketName, String zone) {
        logger.info("Creating bucket: " + bucketName);

        CreateBucketConfiguration bucketConfiguration = CreateBucketConfiguration.builder()
                .location(LocationInfo.builder()
                        .type(LocationType.AVAILABILITY_ZONE)
                        .name(zone)
                        .build())
                .bucket(BucketInfo.builder()
                        .type(BucketType.DIRECTORY)
                        .dataRedundancy(DataRedundancy.SINGLE_AVAILABILITY_ZONE)
                        .build())
                .build();

        CreateBucketRequest bucketRequest = CreateBucketRequest.builder()
                .bucket(bucketName)
                .createBucketConfiguration(bucketConfiguration)
                .build();

        return s3Client.createBucket(bucketRequest)
                .whenComplete((response, exception) -> {
                    if (exception != null) {
                        Throwable cause = exception.getCause();
                        if (cause instanceof BucketAlreadyExistsException) {
                            throw new CompletionException("The bucket already exists: " + ((S3Exception) cause).awsErrorDetails().errorMessage(), cause);
                        }
                        throw new CompletionException("Unexpected error occurred while creating bucket", exception);
                    }
                    logger.info("Bucket created successfully with location: " + response.location());
                });
    }

    /**
     * Creates an S3 bucket asynchronously.
     *
     * @param s3Client the S3 async client to use for the bucket creation
     * @param bucketName the name of the S3 bucket to create
     * @return a {@link CompletableFuture} that completes with the {@link WaiterResponse} containing the {@link HeadBucketResponse}
     *         when the bucket is successfully created
     * @throws CompletionException if there's an error creating the bucket
     */
    public CompletableFuture<WaiterResponse<HeadBucketResponse>> createBucketAsync(S3AsyncClient s3Client, String bucketName) {
        CreateBucketRequest bucketRequest = CreateBucketRequest.builder()
                .bucket(bucketName)
                .build();

        return s3Client.createBucket(bucketRequest)
                .thenCompose(response -> {
                    S3AsyncWaiter s3Waiter = s3Client.waiter();
                    HeadBucketRequest bucketRequestWait = HeadBucketRequest.builder()
                            .bucket(bucketName)
                            .build();
                    return s3Waiter.waitUntilBucketExists(bucketRequestWait);
                })
                .whenComplete((response, exception) -> {
                    if (exception != null) {
                        Throwable cause = exception.getCause();
                        if (cause instanceof BucketAlreadyExistsException) {
                            throw new CompletionException("The S3 bucket exists: " + cause.getMessage(), cause);
                        } else {
                            throw new CompletionException("Failed to create bucket: " + exception.getMessage(), exception);
                        }
                    }
                    logger.info(bucketName + " is ready");
                });
    }

    /**
     * Uploads an object to an Amazon S3 bucket asynchronously.
     *
     * @param s3Client the S3 async client to use for the upload
     * @param bucketName the destination S3 bucket name
     * @param bucketObject the name of the object to be uploaded
     * @param text the content to be uploaded as the object
     * @return a {@link CompletableFuture} that completes with the {@link PutObjectResponse}
     */
    public CompletableFuture<PutObjectResponse> putObjectAsync(S3AsyncClient s3Client, String bucketName, String bucketObject, String text) {
        PutObjectRequest objectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(bucketObject)
                .build();

        return s3Client.putObject(objectRequest, AsyncRequestBody.fromString(text))
                .whenComplete((response, exception) -> {
                    if (exception != null) {
                        Throwable cause = exception.getCause();
                        if (cause instanceof NoSuchBucketException) {
                            throw new CompletionException("The S3 bucket does not exist: " + cause.getMessage(), cause);
                        } else {
                            throw new CompletionException("Failed to put object: " + exception.getMessage(), exception);
                        }
                    }
                });
    }

    /**
     * Creates an AWS IAM access key asynchronously for the specified user name.
     *
     * @param userName the name of the IAM user for whom to create the access key
     * @return a {@link CompletableFuture} that completes with the {@link CreateAccessKeyResponse} containing the created access key
     */
    public CompletableFuture<CreateAccessKeyResponse> createAccessKeyAsync(String userName) {
        CreateAccessKeyRequest request = CreateAccessKeyRequest.builder()
                .userName(userName)
                .build();

        return getIAMAsyncClient().createAccessKey(request)
                .whenComplete((response, exception) -> {
                    if (response != null) {
                        logger.info("Access Key Created.");
                    } else {
                        if (exception != null) {
                            Throwable cause = exception.getCause();
                            if (cause instanceof IamException) {
                                throw new CompletionException("IAM error while creating access key: " + cause.getMessage(), cause);
                            } else {
                                throw new CompletionException("Failed to create access key: " + exception.getMessage(), exception);
                            }
                        }
                    }
                });
    }

    /**
     * Asynchronously selects an Availability Zone ID from the available EC2 zones.
     *
     * @return A {@link CompletableFuture} that resolves to the selected Availability Zone ID.
     * @throws CompletionException if an error occurs during the request or processing.
     */
    public CompletableFuture<String> selectAvailabilityZoneIdAsync() {
        DescribeAvailabilityZonesRequest zonesRequest = DescribeAvailabilityZonesRequest.builder()
                .build();

        return getEc2AsyncClient().describeAvailabilityZones(zonesRequest)
                .thenCompose(response -> {
                    List<AvailabilityZone> zonesList = response.availabilityZones();
                    if (zonesList.isEmpty()) {
                        logger.info("No availability zones found.");
                        return CompletableFuture.completedFuture(null); // Return null if no zones are found.
                    }

                    List<String> zoneIds = zonesList.stream()
                            .map(AvailabilityZone::zoneId) // Get the zoneId (e.g., "usw2-az1").
                            .toList();

                    return CompletableFuture.supplyAsync(() -> promptUserForZoneSelection(zonesList, zoneIds))
                            .thenApply(selectedZone -> {
                                // Return only the selected Zone ID (e.g., "usw2-az1").
                                return selectedZone.zoneId();
                            });
                })
                .whenComplete((result, exception) -> {
                    if (exception == null) {
                        if (result != null) {
                            logger.info("Selected Availability Zone ID: " + result);
                        } else {
                            logger.info("No availability zone selected.");
                        }
                    } else {
                        Throwable cause = exception.getCause();
                        if (cause instanceof Ec2Exception) {
                            throw new CompletionException("EC2 error while selecting availability zone: " + cause.getMessage(), cause);
                        }
                        throw new CompletionException("Failed to select availability zone: " + exception.getMessage(), exception);
                    }
                });
    }

    /**
     * Prompts the user to select an Availability Zone from the given list.
     *
     * @param zonesList the list of Availability Zones
     * @param zoneIds the list of zone IDs
     * @return the selected Availability Zone
     */
    private static AvailabilityZone promptUserForZoneSelection(List<AvailabilityZone> zonesList, List<String> zoneIds) {
        Scanner scanner = new Scanner(System.in);
        int index = -1;

        while (index < 0 || index >= zoneIds.size()) {
            logger.info("Select an availability zone:");
            IntStream.range(0, zoneIds.size()).forEach(i ->
                    logger.info(i + ": " + zoneIds.get(i))
            );
            logger.info("Enter the number corresponding to your choice: ");
            if (scanner.hasNextInt()) {
                index = scanner.nextInt();
            } else {
                scanner.next();
            }
        }

        AvailabilityZone selectedZone = zonesList.get(index);
        logger.info("You selected: " + selectedZone.zoneId());
        return selectedZone;
    }

    /**
     * Asynchronously sets up a new VPC, including creating the VPC, finding the associated route table, and
     * creating a VPC endpoint for the S3 service.
     *
     * @return a {@link CompletableFuture} that, when completed, contains an {@link AbstractMap.SimpleEntry} with the
     *         VPC ID and VPC endpoint ID.
     */
    public CompletableFuture<AbstractMap.SimpleEntry<String, String>> setupVPCAsync() {
        String cidr = "10.0.0.0/16";
        CreateVpcRequest vpcRequest = CreateVpcRequest.builder()
                .cidrBlock(cidr)
                .build();

        return getEc2AsyncClient().createVpc(vpcRequest)
                .thenCompose(vpcResponse -> {
                    String vpcId = vpcResponse.vpc().vpcId();
                    logger.info("VPC Created: {}", vpcId);

                    Ec2AsyncWaiter waiter = getEc2AsyncClient().waiter();
                    DescribeVpcsRequest request = DescribeVpcsRequest.builder()
                            .vpcIds(vpcId)
                            .build();
                    return waiter.waitUntilVpcAvailable(request)
                            .thenApply(waiterResponse -> vpcId);
                })
                .thenCompose(vpcId -> {
                    Filter filter = Filter.builder()
                            .name("vpc-id")
                            .values(vpcId)
                            .build();

                    DescribeRouteTablesRequest describeRouteTablesRequest = DescribeRouteTablesRequest.builder()
                            .filters(filter)
                            .build();

                    return getEc2AsyncClient().describeRouteTables(describeRouteTablesRequest)
                            .thenApply(routeTablesResponse -> {
                                if (routeTablesResponse.routeTables().isEmpty()) {
                                    throw new CompletionException("No route tables found for VPC: " + vpcId, null);
                                }
                                String routeTableId = routeTablesResponse.routeTables().get(0).routeTableId();
                                logger.info("Route table found: {}", routeTableId);
                                return new AbstractMap.SimpleEntry<>(vpcId, routeTableId);
                            });
                })
                .thenCompose(vpcAndRouteTable -> {
                    String vpcId = vpcAndRouteTable.getKey();
                    String routeTableId = vpcAndRouteTable.getValue();
                    Region region = getEc2AsyncClient().serviceClientConfiguration().region();
                    String serviceName = String.format("com.amazonaws.%s.s3express", region.id());

                    CreateVpcEndpointRequest endpointRequest = CreateVpcEndpointRequest.builder()
                            .vpcId(vpcId)
                            .routeTableIds(routeTableId)
                            .serviceName(serviceName)
                            .build();

                    return getEc2AsyncClient().createVpcEndpoint(endpointRequest)
                            .thenApply(vpcEndpointResponse -> {
                                String vpcEndpointId = vpcEndpointResponse.vpcEndpoint().vpcEndpointId();
                                logger.info("VPC Endpoint created: {}", vpcEndpointId);
                                return new AbstractMap.SimpleEntry<>(vpcId, vpcEndpointId);
                            });
                })
                .exceptionally(exception -> {
                    Throwable cause = exception.getCause() != null ? exception.getCause() : exception;
                    if (cause instanceof Ec2Exception) {
                        logger.error("EC2 error during VPC setup: {}", cause.getMessage(), cause);
                        throw new CompletionException("EC2 error during VPC setup: " + cause.getMessage(), cause);
                    }
                    logger.error("VPC setup failed: {}", cause.getMessage(), cause);
                    throw new CompletionException("VPC setup failed: " + cause.getMessage(), cause);
                });
    }
}
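
If you want to drive this wrapper outside the interactive scenario, a minimal usage sketch might look like the following. This is an assumption-laden sketch rather than part of the published sample: the bucket name, zone ID, and object key are placeholders, and it assumes an S3AsyncClient resolved from default credentials whose user has S3 Express One Zone permissions.

    // Hypothetical driver for S3DirectoriesActions; all values below are placeholders.
    S3AsyncClient s3 = S3AsyncClient.builder().region(Region.US_WEST_2).build();
    S3DirectoriesActions actions = new S3DirectoriesActions();

    String zoneId = "usw2-az1";                              // Placeholder Availability Zone ID.
    String bucketName = "demo-bucket--" + zoneId + "--x-s3"; // Directory bucket naming format.

    actions.createDirectoryBucketAsync(s3, bucketName, zoneId).join();
    actions.putObjectAsync(s3, bucketName, "dir/sample.txt", "hello").join();
    String text = actions.getObjectAsync(s3, bucketName, "dir/sample.txt").join().asUtf8String();
    actions.deleteBucketAndObjectsAsync(s3, bucketName).join();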
PHP
SDK for PHP
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Run a scenario demonstrating the basics of Amazon S3 directory buckets and S3 Express One Zone.

echo "\n"; echo "--------------------------------------\n"; echo "Welcome to the Amazon S3 Express Basics demo using PHP!\n"; echo "--------------------------------------\n"; // Change these both of these values to use a different region/availability zone. $region = "us-west-2"; $az = "usw2-az1"; $this->s3Service = new S3Service(new S3Client(['region' => $region])); $this->iamService = new IAMService(new IamClient(['region' => $region])); $uuid = uniqid(); echo <<<INTRO Let's get started! First, please note that S3 Express One Zone works best when working within the AWS infrastructure, specifically when working in the same Availability Zone. To see the best results in this example, and when you implement Directory buckets into your infrastructure, it is best to put your Compute resources in the same AZ as your Directory bucket.\n INTRO; pressEnter(); // 1. Configure a gateway VPC endpoint. This is the recommended method to allow S3 Express One Zone traffic without // the need to pass through an internet gateway or NAT device. echo "\n"; echo "1. First, we'll set up a new VPC and VPC Endpoint if this program is running in an EC2 instance in the same AZ as your Directory buckets will be.\n"; $ec2Choice = testable_readline("Are you running this in an EC2 instance located in the same AZ as your intended Directory buckets? Enter Y/y to setup a VPC Endpoint, or N/n/blank to skip this section."); if($ec2Choice == "Y" || $ec2Choice == "y") { echo "Great! Let's set up a VPC, retrieve the Route Table from it, and create a VPC Endpoint to connect the S3 Client to.\n"; pressEnter(); $this->ec2Service = new EC2Service(new Ec2Client(['region' => $region])); $cidr = "10.0.0.0/16"; $vpc = $this->ec2Service->createVpc($cidr); $this->resources['vpcId'] = $vpc['VpcId']; $this->ec2Service->waitForVpcAvailable($vpc['VpcId']); $routeTable = $this->ec2Service->describeRouteTables([], [ [ 'Name' => "vpc-id", 'Values' => [$vpc['VpcId']], ], ]); $serviceName = "com.amazonaws." . $this->ec2Service->getRegion() . ".s3express"; $vpcEndpoint = $this->ec2Service->createVpcEndpoint($serviceName, $vpc['VpcId'], [$routeTable[0]]); $this->resources['vpcEndpointId'] = $vpcEndpoint['VpcEndpointId']; }else{ echo "Skipping the VPC setup. Don't forget to use this in production!\n"; } // 2. Policies, user, and roles with CDK. echo "\n"; echo "2. Policies, users, and roles with CDK.\n"; echo "Now, we'll set up some policies, roles, and a user. This user will only have permissions to do S3 Express One Zone actions.\n"; pressEnter(); $this->cloudFormationClient = new CloudFormationClient([]); $stackName = "cfn-stack-s3-express-basics-" . uniqid(); $file = file_get_contents(__DIR__ . 
"/../../../../resources/cfn/s3_express_basics/s3_express_template.yml"); $result = $this->cloudFormationClient->createStack([ 'StackName' => $stackName, 'TemplateBody' => $file, 'Capabilities' => ['CAPABILITY_IAM'], ]); $waiter = $this->cloudFormationClient->getWaiter("StackCreateComplete", ['StackName' => $stackName]); try { $waiter->promise()->wait(); }catch(CloudFormationException $caught){ echo "Error waiting for the CloudFormation stack to create: {$caught->getAwsErrorMessage()}\n"; throw $caught; } $this->resources['stackName'] = $stackName; $stackInfo = $this->cloudFormationClient->describeStacks([ 'StackName' => $result['StackId'], ]); $expressUserName = ""; $regularUserName = ""; foreach($stackInfo['Stacks'][0]['Outputs'] as $output) { if ($output['OutputKey'] == "RegularUser") { $regularUserName = $output['OutputValue']; } if ($output['OutputKey'] == "ExpressUser") { $expressUserName = $output['OutputValue']; } } $regularKey = $this->iamService->createAccessKey($regularUserName); $regularCredentials = new Credentials($regularKey['AccessKeyId'], $regularKey['SecretAccessKey']); $expressKey = $this->iamService->createAccessKey($expressUserName); $expressCredentials = new Credentials($expressKey['AccessKeyId'], $expressKey['SecretAccessKey']); // 3. Create an additional client using the credentials with S3 Express permissions. echo "\n"; echo "3. Create an additional client using the credentials with S3 Express permissions.\n"; echo "This client is created with the credentials associated with the user account with the S3 Express policy attached, so it can perform S3 Express operations.\n"; pressEnter(); $s3RegularClient = new S3Client([ 'Region' => $region, 'Credentials' => $regularCredentials, ]); $s3RegularService = new S3Service($s3RegularClient); $s3ExpressClient = new S3Client([ 'Region' => $region, 'Credentials' => $expressCredentials, ]); $s3ExpressService = new S3Service($s3ExpressClient); echo "All the roles and policies were created an attached to the user. Then, a new S3 Client and Service were created using that user's credentials.\n"; echo "We can now use this client to make calls to S3 Express operations. Keeping permissions in mind (and adhering to least-privilege) is crucial to S3 Express.\n"; pressEnter(); // 4. Create two buckets. echo "\n"; echo "3. Create two buckets.\n"; echo "Now we will create a Directory bucket, which is the linchpin of the S3 Express One Zone service.\n"; echo "Directory buckets behave in different ways from regular S3 buckets, which we will explore here.\n"; echo "We'll also create a normal bucket, put an object into the normal bucket, and copy it over to the Directory bucket.\n"; pressEnter(); // Create a directory bucket. These are different from normal S3 buckets in subtle ways. $directoryBucketName = "s3-express-demo-directory-bucket-$uuid--$az--x-s3"; echo "Now, let's create the actual Directory bucket, as well as a regular bucket.\n"; pressEnter(); $s3ExpressService->createBucket($directoryBucketName, [ 'CreateBucketConfiguration' => [ 'Bucket' => [ 'Type' => "Directory", // This is what causes S3 to create a Directory bucket as opposed to a normal bucket. 'DataRedundancy' => "SingleAvailabilityZone", ], 'Location' => [ 'Name' => $az, 'Type' => "AvailabilityZone", ], ], ]); $this->resources['directoryBucketName'] = $directoryBucketName; // Create a normal bucket. $normalBucketName = "normal-bucket-$uuid"; $s3RegularService->createBucket($normalBucketName); $this->resources['normalBucketName'] = $normalBucketName; echo "Great! 
Both buckets were created.\n"; pressEnter(); // 5. Create an object and copy it over. echo "\n"; echo "5. Create an object and copy it over.\n"; echo "We'll create a basic object consisting of some text and upload it to the normal bucket.\n"; echo "Next, we'll copy the object into the Directory bucket using the regular client.\n"; echo "This works fine, because Copy operations are not restricted for Directory buckets.\n"; pressEnter(); $objectKey = "basic-text-object"; $s3RegularService->putObject($normalBucketName, $objectKey, $args = ['Body' => "Look Ma, I'm a bucket!"]); $this->resources['objectKey'] = $objectKey; // Create a session to access the directory bucket. The SDK Client will automatically refresh this as needed. $s3ExpressService->createSession($directoryBucketName); $s3ExpressService->copyObject($directoryBucketName, $objectKey, "$normalBucketName/$objectKey"); echo "It worked! It's important to remember the user permissions when interacting with Directory buckets.\n"; echo "Instead of validating permissions on every call as normal buckets do, Directory buckets utilize the user credentials and session token to validate.\n"; echo "This allows for much faster connection speeds on every call. For single calls, this is low, but for many concurrent calls, this adds up to a lot of time saved.\n"; pressEnter(); // 6. Demonstrate performance difference. echo "\n"; echo "6. Demonstrate performance difference.\n"; $downloads = 1000; echo "Now, let's do a performance test. We'll download the same object from each bucket $downloads times and compare the total time needed. Note: the performance difference will be much more pronounced if this example is run in an EC2 instance in the same AZ as the bucket.\n"; $downloadChoice = testable_readline("If you would like to download each object $downloads times, press enter. Otherwise, enter a custom amount and press enter."); if($downloadChoice && is_numeric($downloadChoice) && $downloadChoice < 1000000){ // A million is enough. I promise. $downloads = $downloadChoice; } // Download the object $downloads times from each bucket and time it to demonstrate the speed difference. $directoryStartTime = hrtime(true); for($i = 0; $i < $downloads; ++$i){ $s3ExpressService->getObject($directoryBucketName, $objectKey); } $directoryEndTime = hrtime(true); $directoryTimeDiff = $directoryEndTime - $directoryStartTime; $normalStartTime = hrtime(true); for($i = 0; $i < $downloads; ++$i){ $s3RegularService->getObject($normalBucketName, $objectKey); } $normalEndTime = hrtime(true); $normalTimeDiff = $normalEndTime - $normalStartTime; echo "The directory bucket took $directoryTimeDiff nanoseconds, while the normal bucket took $normalTimeDiff.\n"; echo "That's a difference of " . ($normalTimeDiff - $directoryTimeDiff) . " nanoseconds, or " . (($normalTimeDiff - $directoryTimeDiff)/1000000000) . " seconds.\n"; pressEnter(); // 7. Populate the buckets to show the lexicographical difference. echo "\n"; echo "7. 
Populate the buckets to show the lexicographical difference.\n"; echo "Now let's explore how Directory buckets store objects in a different manner to regular buckets.\n"; echo "The key is in the name \"Directory!\"\n"; echo "Where regular buckets store their key/value pairs in a flat manner, Directory buckets use actual directories/folders.\n"; echo "This allows for more rapid indexing, traversing, and therefore retrieval times!\n"; echo "The more segmented your bucket is, with lots of directories, sub-directories, and objects, the more efficient it becomes.\n"; echo "This structural difference also causes ListObjects to behave differently, which can cause unexpected results.\n"; echo "Let's add a few more objects with layered directories as see how the output of ListObjects changes.\n"; pressEnter(); // Populate a few more files in each bucket so that we can use ListObjects and show the difference. $otherObject = "other/$objectKey"; $altObject = "alt/$objectKey"; $otherAltObject = "other/alt/$objectKey"; $s3ExpressService->putObject($directoryBucketName, $otherObject); $s3RegularService->putObject($normalBucketName, $otherObject); $this->resources['otherObject'] = $otherObject; $s3ExpressService->putObject($directoryBucketName, $altObject); $s3RegularService->putObject($normalBucketName, $altObject); $this->resources['altObject'] = $altObject; $s3ExpressService->putObject($directoryBucketName, $otherAltObject); $s3RegularService->putObject($normalBucketName, $otherAltObject); $this->resources['otherAltObject'] = $otherAltObject; $listDirectoryBucket = $s3ExpressService->listObjects($directoryBucketName); $listNormalBucket = $s3RegularService->listObjects($normalBucketName); // Directory bucket content echo "Directory bucket content\n"; foreach($listDirectoryBucket['Contents'] as $result){ echo $result['Key'] . "\n"; } // Normal bucket content echo "\nNormal bucket content\n"; foreach($listNormalBucket['Contents'] as $result){ echo $result['Key'] . "\n"; } echo "Notice how the normal bucket lists objects in lexicographical order, while the directory bucket does not. This is because the normal bucket considers the whole \"key\" to be the object identifies, while the directory bucket actually creates directories and uses the object \"key\" as a path to the object.\n"; pressEnter(); echo "\n"; echo "That's it for our tour of the basic operations for S3 Express One Zone.\n"; $cleanUp = testable_readline("Would you like to delete all the resources created during this demo? 
Enter Y/y to delete all the resources."); if($cleanUp){ $this->cleanUp(); } namespace S3; use Aws\CommandInterface; use Aws\Exception\AwsException; use Aws\Result; use Aws\S3\Exception\S3Exception; use Aws\S3\S3Client; use AwsUtilities\AWSServiceClass; use DateTimeInterface; class S3Service extends AWSServiceClass { protected S3Client $client; protected bool $verbose; public function __construct(S3Client $client = null, $verbose = false) { if ($client) { $this->client = $client; } else { $this->client = new S3Client([ 'version' => 'latest', 'region' => 'us-west-2', ]); } $this->verbose = $verbose; } public function setVerbose($verbose) { $this->verbose = $verbose; } public function isVerbose(): bool { return $this->verbose; } public function getClient(): S3Client { return $this->client; } public function setClient(S3Client $client) { $this->client = $client; } public function emptyAndDeleteBucket($bucketName, array $args = []) { try { $objects = $this->listAllObjects($bucketName, $args); $this->deleteObjects($bucketName, $objects, $args); if ($this->verbose) { echo "Deleted all objects and folders from $bucketName.\n"; } $this->deleteBucket($bucketName, $args); } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to delete $bucketName with error: {$exception->getMessage()}\n"; echo "\nPlease fix error with bucket deletion before continuing.\n"; } throw $exception; } } public function createBucket(string $bucketName, array $args = []) { $parameters = array_merge(['Bucket' => $bucketName], $args); try { $this->client->createBucket($parameters); if ($this->verbose) { echo "Created the bucket named: $bucketName.\n"; } } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to create $bucketName with error: {$exception->getMessage()}\n"; echo "Please fix error with bucket creation before continuing."; } throw $exception; } } public function putObject(string $bucketName, string $key, array $args = []) { $parameters = array_merge(['Bucket' => $bucketName, 'Key' => $key], $args); try { $this->client->putObject($parameters); if ($this->verbose) { echo "Uploaded the object named: $key to the bucket named: $bucketName.\n"; } } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to create $key in $bucketName with error: {$exception->getMessage()}\n"; echo "Please fix error with object uploading before continuing."; } throw $exception; } } public function getObject(string $bucketName, string $key, array $args = []): Result { $parameters = array_merge(['Bucket' => $bucketName, 'Key' => $key], $args); try { $object = $this->client->getObject($parameters); if ($this->verbose) { echo "Downloaded the object named: $key to the bucket named: $bucketName.\n"; } } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to download $key from $bucketName with error: {$exception->getMessage()}\n"; echo "Please fix error with object downloading before continuing."; } throw $exception; } return $object; } public function copyObject($bucketName, $key, $copySource, array $args = []) { $parameters = array_merge(['Bucket' => $bucketName, 'Key' => $key, "CopySource" => $copySource], $args); try { $this->client->copyObject($parameters); if ($this->verbose) { echo "Copied the object from: $copySource in $bucketName to: $key.\n"; } } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to copy $copySource in $bucketName with error: {$exception->getMessage()}\n"; echo "Please fix error with object copying before continuing."; } throw $exception; } 
} public function listObjects(string $bucketName, $start = 0, $max = 1000, array $args = []) { $parameters = array_merge(['Bucket' => $bucketName, 'Marker' => $start, "MaxKeys" => $max], $args); try { $objects = $this->client->listObjectsV2($parameters); if ($this->verbose) { echo "Retrieved the list of objects from: $bucketName.\n"; } } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to retrieve the objects from $bucketName with error: {$exception->getMessage()}\n"; echo "Please fix error with list objects before continuing."; } throw $exception; } return $objects; } public function listAllObjects($bucketName, array $args = []) { $parameters = array_merge(['Bucket' => $bucketName], $args); $contents = []; $paginator = $this->client->getPaginator("ListObjectsV2", $parameters); foreach ($paginator as $result) { if($result['KeyCount'] == 0){ break; } foreach ($result['Contents'] as $object) { $contents[] = $object; } } return $contents; } public function deleteObjects(string $bucketName, array $objects, array $args = []) { $listOfObjects = array_map( function ($object) { return ['Key' => $object]; }, array_column($objects, 'Key') ); if(!$listOfObjects){ return; } $parameters = array_merge(['Bucket' => $bucketName, 'Delete' => ['Objects' => $listOfObjects]], $args); try { $this->client->deleteObjects($parameters); if ($this->verbose) { echo "Deleted the list of objects from: $bucketName.\n"; } } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to delete the list of objects from $bucketName with error: {$exception->getMessage()}\n"; echo "Please fix error with object deletion before continuing."; } throw $exception; } } public function deleteBucket(string $bucketName, array $args = []) { $parameters = array_merge(['Bucket' => $bucketName], $args); try { $this->client->deleteBucket($parameters); if ($this->verbose) { echo "Deleted the bucket named: $bucketName.\n"; } } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to delete $bucketName with error: {$exception->getMessage()}\n"; echo "Please fix error with bucket deletion before continuing."; } throw $exception; } } public function deleteObject(string $bucketName, string $fileName, array $args = []) { $parameters = array_merge(['Bucket' => $bucketName, 'Key' => $fileName], $args); try { $this->client->deleteObject($parameters); if ($this->verbose) { echo "Deleted the object named: $fileName from $bucketName.\n"; } } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to delete $fileName from $bucketName with error: {$exception->getMessage()}\n"; echo "Please fix error with object deletion before continuing."; } throw $exception; } } public function listBuckets(array $args = []) { try { $buckets = $this->client->listBuckets($args); if ($this->verbose) { echo "Retrieved all " . count($buckets) . 
"\n"; } } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to retrieve bucket list with error: {$exception->getMessage()}\n"; echo "Please fix error with bucket lists before continuing."; } throw $exception; } return $buckets; } public function preSignedUrl(CommandInterface $command, DateTimeInterface|int|string $expires, array $options = []) { $request = $this->client->createPresignedRequest($command, $expires, $options); try { $presignedUrl = (string)$request->getUri(); } catch (AwsException $exception) { if ($this->verbose) { echo "Failed to create a presigned url: {$exception->getMessage()}\n"; echo "Please fix error with presigned urls before continuing."; } throw $exception; } return $presignedUrl; } public function createSession(string $bucketName) { try{ $result = $this->client->createSession([ 'Bucket' => $bucketName, ]); return $result; }catch(S3Exception $caught){ if($caught->getAwsErrorType() == "NoSuchBucket"){ echo "The specified bucket does not exist."; } throw $caught; } } }
Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Run a scenario demonstrating the basics of Amazon S3 directory buckets and S3 Express One Zone.

class S3ExpressScenario: """Runs an interactive scenario that shows how to get started with S3 Express.""" def __init__( self, cloud_formation_resource: ServiceResource, ec2_client: client, iam_client: client, ): self.cloud_formation_resource = cloud_formation_resource self.ec2_client = ec2_client self.iam_client = iam_client self.region = ec2_client.meta.region_name self.stack = None self.vpc_id = None self.vpc_endpoint_id = None self.regular_bucket_name = None self.directory_bucket_name = None self.s3_express_wrapper = None self.s3_regular_wrapper = None def s3_express_scenario(self): """ Runs the scenario. """ print("") print_dashes() print("Welcome to the Amazon S3 Express Basics demo using Python (Boto 3)!") print_dashes() print( """ Let's get started! First, please note that S3 Express One Zone works best when working within the AWS infrastructure, specifically when working in the same Availability Zone. To see the best results in this example and when you implement Directory buckets into your infrastructure, it is best to put your compute resources in the same AZ as your Directory bucket. """ ) press_enter_to_continue() # Create an optional VPC and create 2 IAM users. express_user_name, regular_user_name = self.create_vpc_and_users() # Set up two S3 clients, one regular and one express, and two buckets, one regular and one express. self.setup_clients_and_buckets(express_user_name, regular_user_name) # Create an S3 session for the express S3 client and add objects to the buckets. bucket_object = self.create_session_and_add_objects() # Demonstrate performance differences between regular and express buckets. self.demonstrate_performance(bucket_object) # Populate the buckets to show the lexicographical difference between regular and express buckets. self.show_lexicographical_differences(bucket_object) print("") print("That's it for our tour of the basic operations for S3 Express One Zone.") if q.ask( "Would you like to delete all the resources created during this demo (y/n)? ", q.is_yesno, ): self.cleanup() def create_vpc_and_users(self) -> None: """ Optionally create a VPC. Create two IAM users, one with S3 Express One Zone permissions and one without. """ # Configure a gateway VPC endpoint. This is the recommended method to allow S3 Express One Zone traffic without # the need to pass through an internet gateway or NAT device. print( """ 1. First, we'll set up a new VPC and VPC Endpoint if this program is running in an EC2 instance in the same AZ as your Directory buckets will be. Are you running this in an EC2 instance located in the same AZ as your intended Directory buckets? """ ) if q.ask("Do you want to setup a VPC Endpoint? (y/n) ", q.is_yesno): print( "Great! Let's set up a VPC, retrieve the Route Table from it, and create a VPC Endpoint to connect the S3 Client to." ) self.setup_vpc() press_enter_to_continue() else: print("Skipping the VPC setup. Don't forget to use this in production!") print( """ 2. Policies, users, and roles with CDK. Now, we'll set up some policies, roles, and a user. This user will only have permissions to do S3 Express One Zone actions. 
""" ) press_enter_to_continue() stack_name = f"cfn-stack-s3-express-basics--{uuid.uuid4()}" template_as_string = S3ExpressScenario.get_template_as_string() self.stack = self.deploy_cloudformation_stack(stack_name, template_as_string) regular_user_name = None express_user_name = None outputs = self.stack.outputs for output in outputs: if output.get("OutputKey") == "RegularUser": regular_user_name = output.get("OutputValue") elif output.get("OutputKey") == "ExpressUser": express_user_name = output.get("OutputValue") if not regular_user_name or not express_user_name: error_string = f""" Failed to retrieve required outputs from CloudFormation stack. 'regular_user_name'={regular_user_name}, 'express_user_name'={express_user_name} """ logger.error(error_string) raise ValueError(error_string) return express_user_name, regular_user_name def setup_clients_and_buckets( self, express_user_name: str, regular_user_name: str ) -> None: """ Set up two S3 clients, one regular and one express, and two buckets, one regular and one express. :param express_user_name: The name of the user with S3 Express permissions. :param regular_user_name: The name of the user with regular S3 permissions. """ regular_credentials = self.create_access_key(regular_user_name) express_credentials = self.create_access_key(express_user_name) # 3. Create an additional client using the credentials with S3 Express permissions. print( """ 3. Create an additional client using the credentials with S3 Express permissions. This client is created with the credentials associated with the user account with the S3 Express policy attached, so it can perform S3 Express operations. """ ) press_enter_to_continue() s3_regular_client = self.create_s3__client_with_access_key_credentials( regular_credentials ) self.s3_regular_wrapper = S3ExpressWrapper(s3_regular_client) s3_express_client = self.create_s3__client_with_access_key_credentials( express_credentials ) self.s3_express_wrapper = S3ExpressWrapper(s3_express_client) print( """ All the roles and policies were created and attached to the user. Then a new S3 Client were created using that user's credentials. We can now use this client to make calls to S3 Express operations. Keeping permissions in mind (and adhering to least-privilege) is crucial to S3 Express. """ ) press_enter_to_continue() # 4. Create two buckets. print( """ 3. Create two buckets. Now we will create a Directory bucket which is the linchpin of the S3 Express One Zone service. Directory buckets behave in different ways from regular S3 buckets which we will explore here. We'll also create a normal bucket, put an object into the normal bucket, and copy it over to the Directory bucket. """ ) # Create a directory bucket. These are different from normal S3 buckets in subtle ways. bucket_prefix = q.ask( "Enter a bucket name prefix that will be used for both buckets: ", q.re_match(r"[a-z0-9](?:[a-z0-9-\.]*)[a-z0-9]$"), ) # Some availability zones are not supported for Directory buckets. We'll choose one that is supported. print( "Now, let's choose an availability zone for the Directory bucket. We'll choose one that is supported." ) while True: availability_zone = self.select_availability_zone_id(self.region) # Construct the parts of a directory bucket name that is made unique with a UUID string. 
directory_bucket_suffix = f"--{availability_zone['ZoneId']}--x-s3" max_uuid_length = 63 - len(bucket_prefix) - len(directory_bucket_suffix) - 1 bucket_uuid = str(uuid.uuid4()).replace("-", "")[:max_uuid_length] directory_bucket_name = ( f"{bucket_prefix}-{bucket_uuid}{directory_bucket_suffix}" ) regular_bucket_name = f"{bucket_prefix}-regular-{bucket_uuid}" configuration = { "Bucket": { "Type": "Directory", "DataRedundancy": "SingleAvailabilityZone", }, "Location": { "Name": availability_zone["ZoneId"], "Type": "AvailabilityZone", }, } press_enter_to_continue() print( "Now, let's create the actual Directory bucket, as well as a regular bucket." ) press_enter_to_continue() try: self.s3_express_wrapper.create_bucket( directory_bucket_name, configuration ) break except ClientError as client_error: if client_error.response["Error"]["Code"] == "InvalidBucketName": print( f"Bucket '{directory_bucket_name}' is invalid. This may be because of selected availability zone." ) if q.ask( "Would you like to select a different availability zone? ", q.is_yesno, ): continue else: raise else: raise print(f"Created directory bucket, '{directory_bucket_name}'") self.directory_bucket_name = directory_bucket_name self.s3_regular_wrapper.create_bucket(regular_bucket_name) print(f"Created regular bucket, '{regular_bucket_name}'") self.regular_bucket_name = regular_bucket_name print("Great! Both buckets were created.") press_enter_to_continue() def create_session_and_add_objects(self) -> None: """ Create a session for the express S3 client and add objects to the buckets. """ print( """ 5. Create an object and copy it over. We'll create a basic object consisting of some text and upload it to the normal bucket. Next we'll copy the object into the Directory bucket using the regular client. This works fine because copy operations are not restricted for Directory buckets. """ ) press_enter_to_continue() bucket_object = "basic-text-object" self.s3_regular_wrapper.put_object( self.regular_bucket_name, bucket_object, "Look Ma, I'm a bucket!" ) self.s3_express_wrapper.create_session(self.directory_bucket_name) self.s3_express_wrapper.copy_object( self.regular_bucket_name, bucket_object, self.directory_bucket_name, bucket_object, ) print( """ It worked! It's important to remember the user permissions when interacting with Directory buckets. Instead of validating permissions on every call as normal buckets do, Directory buckets utilize the user credentials and session token to validate. This allows for much faster connection speeds on every call. For single calls, this is low, but for many concurrent calls this adds up to a lot of time saved. """ ) press_enter_to_continue() return bucket_object def demonstrate_performance(self, bucket_object: str) -> None: """ Demonstrate performance differences between regular and Directory buckets. :param bucket_object: The name of the object to download from each bucket. """ print("") print("6. Demonstrate performance difference.") print( """ Now, let's do a performance test. We'll download the same object from each bucket 'downloads' times and compare the total time needed. Note: the performance difference will be much more pronounced if this example is run in an EC2 instance in the same Availability Zone as the bucket. """ ) downloads = 1000 print( f"The number of downloads of the same object for this example is set at {downloads}." ) if q.ask("Would you like to download a different number? 
(y/n) ", q.is_yesno): max_downloads = 1000000 downloads = q.ask( f"Enter a number between 1 and {max_downloads} for the number of downloads: ", q.is_int, q.in_range(1, max_downloads), ) # Download the object 'downloads' times from each bucket and time it to demonstrate the speed difference. print("Downloading from the Directory bucket.") directory_time_start = time.time_ns() for index in range(downloads): if index % 10 == 0: print(f"Download {index} of {downloads}") self.s3_express_wrapper.get_object( self.directory_bucket_name, bucket_object ) directory_time_difference = time.time_ns() - directory_time_start print("Downloading from the normal bucket.") normal_time_start = time.time_ns() for index in range(downloads): if index % 10 == 0: print(f"Download {index} of {downloads}") self.s3_regular_wrapper.get_object(self.regular_bucket_name, bucket_object) normal_time_difference = time.time_ns() - normal_time_start print( f"The directory bucket took {directory_time_difference} nanoseconds, while the normal bucket took {normal_time_difference}." ) difference = normal_time_difference - directory_time_difference print(f"That's a difference of {difference} nanoseconds, or") print(f"{(difference) / 1000000000} seconds.") if difference < 0: print( "The directory buckets were slower. This can happen if you are not running on the cloud within a vpc." ) press_enter_to_continue() def show_lexicographical_differences(self, bucket_object: str) -> None: """ Show the lexicographical difference between Directory buckets and regular buckets. This is done by creating a few objects in each bucket and listing them to show the difference. :param bucket_object: The object to use for the listing operations. """ print( """ 7. Populate the buckets to show the lexicographical difference. Now let's explore how Directory buckets store objects in a different manner to regular buckets. The key is in the name "Directory". Where regular buckets store their key/value pairs in a flat manner, Directory buckets use actual directories/folders. This allows for more rapid indexing, traversing, and therefore retrieval times! The more segmented your bucket is, with lots of directories, sub-directories, and objects, the more efficient it becomes. This structural difference also causes ListObjects to behave differently, which can cause unexpected results. Let's add a few more objects with layered directories to see how the output of ListObjects changes. """ ) press_enter_to_continue() # Populate a few more files in each bucket so that we can use ListObjects and show the difference. 
other_object = f"other/{bucket_object}" alt_object = f"alt/{bucket_object}" other_alt_object = f"other/alt/{bucket_object}" self.s3_regular_wrapper.put_object(self.regular_bucket_name, other_object, "") self.s3_express_wrapper.put_object(self.directory_bucket_name, other_object, "") self.s3_regular_wrapper.put_object(self.regular_bucket_name, alt_object, "") self.s3_express_wrapper.put_object(self.directory_bucket_name, alt_object, "") self.s3_regular_wrapper.put_object( self.regular_bucket_name, other_alt_object, "" ) self.s3_express_wrapper.put_object( self.directory_bucket_name, other_alt_object, "" ) directory_bucket_objects = self.s3_express_wrapper.list_objects( self.directory_bucket_name ) regular_bucket_objects = self.s3_regular_wrapper.list_objects( self.regular_bucket_name ) print("Directory bucket content") for bucket_object in directory_bucket_objects: print(f" {bucket_object['Key']}") print("Normal bucket content") for bucket_object in regular_bucket_objects: print(f" {bucket_object['Key']}") print( """ Notice how the normal bucket lists objects in lexicographical order, while the directory bucket does not. This is because the normal bucket considers the whole "key" to be the object identifier, while the directory bucket actually creates directories and uses the object "key" as a path to the object. """ ) press_enter_to_continue() def cleanup(self) -> None: """ Delete resources created by this scenario. """ if self.directory_bucket_name is not None: self.s3_express_wrapper.delete_bucket_and_objects( self.directory_bucket_name ) print(f"Deleted directory bucket, '{self.directory_bucket_name}'") self.directory_bucket_name = None if self.regular_bucket_name is not None: self.s3_regular_wrapper.delete_bucket_and_objects(self.regular_bucket_name) print(f"Deleted regular bucket, '{self.regular_bucket_name}'") self.regular_bucket_name = None if self.stack is not None: self.destroy_cloudformation_stack(self.stack) self.stack = None self.tear_done_vpc() def create_access_key(self, user_name: str) -> dict[str, any]: """ Creates an access key for the user. :param user_name: The name of the user. :return: The access key for the user. """ try: access_key = self.iam_client.create_access_key(UserName=user_name) return access_key["AccessKey"] except ClientError as client_error: logging.error( "Couldn't create the access key. Here's why: %s", client_error.response["Error"]["Message"], ) raise def create_s3__client_with_access_key_credentials( self, access_key: dict[str, any] ) -> client: """ Creates an S3 client with access key credentials. :param access_key: The access key for the user. :return: The S3 Express One Zone client. """ try: s3_express_client = boto3.client( "s3", aws_access_key_id=access_key["AccessKeyId"], aws_secret_access_key=access_key["SecretAccessKey"], region_name=self.region, ) return s3_express_client except ClientError as client_error: logging.error( "Couldn't create the S3 Express One Zone client. Here's why: %s", client_error.response["Error"]["Message"], ) raise def select_availability_zone_id(self, region: str) -> dict[str, any]: """ Selects an availability zone. :param region: The region to select the availability zone from. :return: The availability zone dictionary. 
""" try: response = self.ec2_client.describe_availability_zones( Filters=[{"Name": "region-name", "Values": [region]}] ) availability_zones = response["AvailabilityZones"] zone_names = [zone["ZoneName"] for zone in availability_zones] index = q.choose("Select an availability zone: ", zone_names) return availability_zones[index] except ClientError as client_error: logging.error( "Couldn't describe availability zones. Here's why: %s", client_error.response["Error"]["Message"], ) raise def deploy_cloudformation_stack( self, stack_name: str, cfn_template: str ) -> ServiceResource: """ Deploys prerequisite resources used by the scenario. The resources are defined in the associated `cfn_template.yaml` AWS CloudFormation script and are deployed as a CloudFormation stack, so they can be easily managed and destroyed. :param stack_name: The name of the CloudFormation stack. :param cfn_template: The CloudFormation template as a string. :return: The CloudFormation stack resource. """ print(f"Deploying CloudFormation stack: {stack_name}.") stack = self.cloud_formation_resource.create_stack( StackName=stack_name, TemplateBody=cfn_template, Capabilities=["CAPABILITY_NAMED_IAM"], ) print(f"CloudFormation stack creation started: {stack_name}") print("Waiting for CloudFormation stack creation to complete...") waiter = self.cloud_formation_resource.meta.client.get_waiter( "stack_create_complete" ) waiter.wait(StackName=stack.name) stack.load() print("CloudFormation stack creation complete.") return stack def destroy_cloudformation_stack(self, stack: ServiceResource) -> None: """ Destroys the resources managed by the CloudFormation stack, and the CloudFormation stack itself. :param stack: The CloudFormation stack that manages the example resources. """ try: print( f"CloudFormation stack '{stack.name}' is being deleted. This may take a few minutes." ) stack.delete() waiter = self.cloud_formation_resource.meta.client.get_waiter( "stack_delete_complete" ) waiter.wait(StackName=stack.name) print(f"CloudFormation stack '{stack.name}' has been deleted.") except ClientError as client_error: logging.error( "Couldn't delete the CloudFormation stack. Here's why: %s", client_error.response["Error"]["Message"], ) @staticmethod def get_template_as_string() -> str: """ Returns a string containing this scenario's CloudFormation template. """ script_directory = os.path.dirname(os.path.abspath(__file__)) template_file_path = os.path.join(script_directory, "s3_express_template.yaml") file = open(template_file_path, "r") return file.read() def setup_vpc(self): cidr = "10.0.0.0/16" try: response = self.ec2_client.create_vpc(CidrBlock=cidr) self.vpc_id = response["Vpc"]["VpcId"] waiter = self.ec2_client.get_waiter("vpc_available") waiter.wait(VpcIds=[self.vpc_id]) print(f"Created vpc {self.vpc_id}") except ClientError as client_error: logging.error( "Couldn't create the vpc. Here's why: %s", client_error.response["Error"]["Message"], ) raise try: response = self.ec2_client.describe_route_tables( Filters=[{"Name": "vpc-id", "Values": [self.vpc_id]}] ) route_table_id = response["RouteTables"][0]["RouteTableId"] service_name = f"com.amazonaws.{self.ec2_client.meta.region_name}.s3express" response = self.ec2_client.create_vpc_endpoint( VpcId=self.vpc_id, RouteTableIds=[route_table_id], ServiceName=service_name, ) self.vpc_endpoint_id = response["VpcEndpoint"]["VpcEndpointId"] print(f"Created vpc endpoint {self.vpc_endpoint_id}") except ClientError as client_error: logging.error( "Couldn't create the vpc endpoint. 
Here's why: %s", client_error.response["Error"]["Message"], ) raise def tear_done_vpc(self) -> None: if self.vpc_endpoint_id is not None: try: self.ec2_client.delete_vpc_endpoints( VpcEndpointIds=[self.vpc_endpoint_id] ) print(f"Deleted vpc endpoint {self.vpc_endpoint_id}.") self.vpc_endpoint_id = None except ClientError as client_error: logging.error( "Couldn't delete the vpc endpoint %s. Here's why: %s", self.vpc_endpoint_id, client_error.response["Error"]["Message"], ) if self.vpc_id is not None: try: self.ec2_client.delete_vpc(VpcId=self.vpc_id) print(f"Deleted vpc {self.vpc_id}") self.vpc_id = None except ClientError as client_error: logging.error( "Couldn't delete the vpc %s. Here's why: %s", self.vpc_id, client_error.response["Error"]["Message"], )

A wrapper class for Amazon S3 Express SDK functions.

class S3ExpressWrapper: """Encapsulates Amazon S3 Express One Zone actions using the client interface.""" def __init__(self, s3_client: Any) -> None: """ Initializes the S3ExpressWrapper with an S3 client. :param s3_client: A Boto3 Amazon S3 client. This client provides low-level access to AWS S3 services. """ self.s3_client = s3_client @classmethod def from_client(cls) -> "S3ExpressWrapper": """ Creates an S3ExpressWrapper instance with a default s3 client. :return: An instance of S3ExpressWrapper initialized with the default S3 client. """ s3_client = boto3.client("s3") return cls(s3_client) def create_bucket( self, bucket_name: str, bucket_configuration: dict[str, any] = None ) -> None: """ Creates a bucket. :param bucket_name: The name of the bucket. :param bucket_configuration: The optional configuration for the bucket. """ try: params = {"Bucket": bucket_name} if bucket_configuration: params["CreateBucketConfiguration"] = bucket_configuration self.s3_client.create_bucket(**params) except ClientError as client_error: # Do not log InvalidBucketName error because it is logged elsewhere. if client_error.response["Error"]["Code"] != "InvalidBucketName": logging.error( "Couldn't create the bucket %s. Here's why: %s", bucket_name, client_error.response["Error"]["Message"], ) raise def delete_bucket_and_objects(self, bucket_name: str) -> None: """ Deletes a bucket and its objects. :param bucket_name: The name of the bucket. """ try: # Delete the objects in the bucket first. This is required for a bucket to be deleted. paginator = self.s3_client.get_paginator("list_objects_v2") page_iterator = paginator.paginate(Bucket=bucket_name) for page in page_iterator: if "Contents" in page: delete_keys = { "Objects": [{"Key": obj["Key"]} for obj in page["Contents"]] } response = self.s3_client.delete_objects( Bucket=bucket_name, Delete=delete_keys ) if "Errors" in response: for error in response["Errors"]: logging.error( "Couldn't delete object %s. Here's why: %s", error["Key"], error["Message"], ) self.s3_client.delete_bucket(Bucket=bucket_name) except ClientError as client_error: logging.error( "Couldn't delete the bucket %s. Here's why: %s", bucket_name, client_error.response["Error"]["Message"], ) def put_object(self, bucket_name: str, object_key: str, content: str) -> None: """ Puts an object into a bucket. :param bucket_name: The name of the bucket. :param object_key: The key of the object. :param content: The content of the object. """ try: self.s3_client.put_object(Body=content, Bucket=bucket_name, Key=object_key) except ClientError as client_error: logging.error( "Couldn't put the object %s into bucket %s. Here's why: %s", object_key, bucket_name, client_error.response["Error"]["Message"], ) raise def list_objects(self, bucket: str) -> list[str]: """ Lists objects in a bucket. :param bucket: The name of the bucket. :return: The list of objects in the bucket. """ try: response = self.s3_client.list_objects_v2(Bucket=bucket) return response.get("Contents", []) except ClientError as client_error: logging.error( "Couldn't list objects in bucket %s. Here's why: %s", bucket, client_error.response["Error"]["Message"], ) raise def copy_object( self, source_bucket: str, source_key: str, destination_bucket: str, destination_key: str, ) -> None: """ Copies an object from one bucket to another. :param source_bucket: The source bucket. :param source_key: The source key. :param destination_bucket: The destination bucket. :param destination_key: The destination key. 
:return: None """ try: self.s3_client.copy_object( CopySource={"Bucket": source_bucket, "Key": source_key}, Bucket=destination_bucket, Key=destination_key, ) except ClientError as client_error: logging.error( "Couldn't copy object %s from bucket %s to bucket %s. Here's why: %s", source_key, source_bucket, destination_bucket, client_error.response["Error"]["Message"], ) raise def create_session(self, bucket_name: str) -> None: """ Creates an express session. :param bucket_name: The name of the bucket. """ try: self.s3_client.create_session(Bucket=bucket_name) except ClientError as client_error: logging.error( "Couldn't create the express session for bucket %s. Here's why: %s", bucket_name, client_error.response["Error"]["Message"], ) raise def get_object(self, bucket_name: str, object_key: str) -> None: """ Gets an object from a bucket. :param bucket_name: The name of the bucket. :param object_key: The key of the object. """ try: self.s3_client.get_object(Bucket=bucket_name, Key=object_key) except ClientError as client_error: logging.error( "Couldn't get the object %s from bucket %s. Here's why: %s", object_key, bucket_name, client_error.response["Error"]["Message"], ) raise
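For reference, here is a short, hypothetical driver for the wrapper class above. It assumes the S3ExpressWrapper listing is importable, that your default credentials carry the S3 Express permissions the scenario sets up, and that the directory bucket named below already exists (the name is a placeholder).

# Assumes S3ExpressWrapper from the listing above is in scope and that
# "demo--usw2-az1--x-s3" is an existing directory bucket (hypothetical name).
wrapper = S3ExpressWrapper.from_client()

bucket = "demo--usw2-az1--x-s3"
wrapper.create_session(bucket)
wrapper.put_object(bucket, "dir/hello.txt", "hello from S3 Express")
for obj in wrapper.list_objects(bucket):
    print(obj["Key"])
wrapper.delete_bucket_and_objects(bucket)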
Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Run an interactive scenario demonstrating Amazon S3 features.

public class S3DirectoriesScenario { public static final String DASHES = new String(new char[80]).replace("\0", "-"); private static final Logger logger = LoggerFactory.getLogger(S3DirectoriesScenario.class); static Scanner scanner = new Scanner(System.in); private static S3AsyncClient mS3RegularClient; private static S3AsyncClient mS3ExpressClient; private static String mdirectoryBucketName; private static String mregularBucketName; private static String stackName = "cfn-stack-s3-express-basics--" + UUID.randomUUID(); private static String regularUser = ""; private static String vpcId = ""; private static String expressUser = ""; private static String vpcEndpointId = ""; private static final S3DirectoriesActions s3DirectoriesActions = new S3DirectoriesActions(); public static void main(String[] args) { try { s3ExpressScenario(); } catch (RuntimeException e) { logger.info(e.getMessage()); } } // Runs the scenario. private static void s3ExpressScenario() { logger.info(DASHES); logger.info("Welcome to the Amazon S3 Express Basics demo using AWS SDK for Java V2."); logger.info(""" Let's get started! First, please note that S3 Express One Zone works best when working within the AWS infrastructure, specifically when working in the same Availability Zone (AZ). To see the best results in this example and when you implement directory buckets into your infrastructure, it is best to put your compute resources in the same AZ as your directory bucket. """); waitForInputToContinue(scanner); logger.info(DASHES); // Create an optional VPC and create 2 IAM users. UserNames userNames = createVpcUsers(); String expressUserName = userNames.getExpressUserName(); String regularUserName = userNames.getRegularUserName(); // Set up two S3 clients, one regular and one express, // and two buckets, one regular and one directory. setupClientsAndBuckets(expressUserName, regularUserName); // Create an S3 session for the express S3 client and add objects to the buckets. logger.info("Now let's add some objects to our buckets and demonstrate how to work with S3 Sessions."); waitForInputToContinue(scanner); String bucketObject = createSessionAddObjects(); // Demonstrate performance differences between regular and directory buckets. demonstratePerformance(bucketObject); // Populate the buckets to show the lexicographical difference between // regular and express buckets. showLexicographicalDifferences(bucketObject); logger.info(DASHES); logger.info("That's it for our tour of the basic operations for S3 Express One Zone."); logger.info("Would you like to cleanUp the AWS resources? (y/n): "); String response = scanner.next().trim().toLowerCase(); if (response.equals("y")) { cleanUp(stackName); } } /* Delete resources created by this scenario. 
*/ public static void cleanUp(String stackName) { try { if (mdirectoryBucketName != null) { s3DirectoriesActions.deleteBucketAndObjectsAsync(mS3ExpressClient, mdirectoryBucketName).join(); } logger.info("Deleted directory bucket " + mdirectoryBucketName); mdirectoryBucketName = null; if (mregularBucketName != null) { s3DirectoriesActions.deleteBucketAndObjectsAsync(mS3RegularClient, mregularBucketName).join(); } } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof S3Exception) { logger.error("S3Exception occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } } logger.info("Deleted regular bucket " + mregularBucketName); mregularBucketName = null; CloudFormationHelper.destroyCloudFormationStack(stackName); } private static void showLexicographicalDifferences(String bucketObject) { logger.info(DASHES); logger.info(""" 7. Populate the buckets to show the lexicographical (alphabetical) difference when object names are listed. Now let's explore how directory buckets store objects in a different manner to regular buckets. The key is in the name "Directory". Where regular buckets store their key/value pairs in a flat manner, directory buckets use actual directories/folders. This allows for more rapid indexing, traversing, and therefore retrieval times! The more segmented your bucket is, with lots of directories, sub-directories, and objects, the more efficient it becomes. This structural difference also causes `ListObject` operations to behave differently, which can cause unexpected results. Let's add a few more objects in sub-directories to see how the output of ListObjects changes. """); waitForInputToContinue(scanner); // Populate a few more files in each bucket so that we can use // ListObjects and show the difference. String otherObject = "other/" + bucketObject; String altObject = "alt/" + bucketObject; String otherAltObject = "other/alt/" + bucketObject; try { s3DirectoriesActions.putObjectAsync(mS3RegularClient, mregularBucketName, otherObject, "").join(); s3DirectoriesActions.putObjectAsync(mS3ExpressClient, mdirectoryBucketName, otherObject, "").join(); s3DirectoriesActions.putObjectAsync(mS3RegularClient, mregularBucketName, altObject, "").join(); s3DirectoriesActions.putObjectAsync(mS3ExpressClient, mdirectoryBucketName, altObject, "").join(); s3DirectoriesActions.putObjectAsync(mS3RegularClient, mregularBucketName, otherAltObject, "").join(); s3DirectoriesActions.putObjectAsync(mS3ExpressClient, mdirectoryBucketName, otherAltObject, "").join(); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof NoSuchBucketException) { logger.error("S3Exception occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } return; } try { // List objects in both S3 buckets. 
List<String> dirBucketObjects = s3DirectoriesActions.listObjectsAsync(mS3ExpressClient, mdirectoryBucketName).join(); List<String> regBucketObjects = s3DirectoriesActions.listObjectsAsync(mS3RegularClient, mregularBucketName).join(); logger.info("Directory bucket content"); for (String obj : dirBucketObjects) { logger.info(obj); } logger.info("Regular bucket content"); for (String obj : regBucketObjects) { logger.info(obj); } } catch (CompletionException e) { logger.error("Async operation failed: {} ", e.getCause().getMessage()); return; } logger.info(""" Notice how the regular bucket lists objects in lexicographical order, while the directory bucket does not. This is because the regular bucket considers the whole "key" to be the object identifier, while the directory bucket actually creates directories and uses the object "key" as a path to the object. """); waitForInputToContinue(scanner); } /** * Demonstrates the performance difference between downloading an object from a directory bucket and a regular bucket. * * <p>This method: * <ul> * <li>Prompts the user to choose the number of downloads (default is 1,000).</li> * <li>Downloads the specified object from the directory bucket and measures the total time.</li> * <li>Downloads the same object from the regular bucket and measures the total time.</li> * <li>Compares the time differences and prints the results.</li> * </ul> * * <p>Note: The performance difference will be more pronounced if this example is run on an EC2 instance * in the same Availability Zone as the buckets. * * @param bucketObject the name of the object to download */ private static void demonstratePerformance(String bucketObject) { logger.info(DASHES); logger.info("6. Demonstrate the performance difference."); logger.info(""" Now, let's do a performance test. We'll download the same object from each bucket repeatedly and compare the total time needed. Note: the performance difference will be much more pronounced if this example is run in an EC2 instance in the same Availability Zone as the bucket. """); waitForInputToContinue(scanner); int downloads = 1000; // Default value. logger.info("The default number of downloads of the same object for this example is set at " + downloads + "."); // Ask if the user wants to download a different number. logger.info("Would you like to download the file a different number of times? (y/n): "); String response = scanner.next().trim().toLowerCase(); if (response.equals("y")) { int maxDownloads = 1_000_000; // Ask for a valid number of downloads. while (true) { logger.info("Enter a number between 1 and " + maxDownloads + " for the number of downloads: "); if (scanner.hasNextInt()) { downloads = scanner.nextInt(); if (downloads >= 1 && downloads <= maxDownloads) { break; } else { logger.info("Please enter a number between 1 and " + maxDownloads + "."); } } else { logger.info("Invalid input. Please enter a valid integer."); scanner.next(); } } logger.info("You have chosen to download {} items.", downloads); } else { logger.info("No changes made. Using default downloads: {}", downloads); } // Simulating the download process for the directory bucket. logger.info("Downloading from the directory bucket."); long directoryTimeStart = System.nanoTime(); for (int index = 0; index < downloads; index++) { if (index % 50 == 0) { logger.info("Download " + index + " of " + downloads); } try { // Get the object from the directory bucket. 
s3DirectoriesActions.getObjectAsync(mS3ExpressClient, mdirectoryBucketName, bucketObject).join(); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof NoSuchKeyException) { logger.error("S3Exception occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } return; } } long directoryTimeDifference = System.nanoTime() - directoryTimeStart; // Download from the regular bucket. logger.info("Downloading from the regular bucket."); long normalTimeStart = System.nanoTime(); for (int index = 0; index < downloads; index++) { if (index % 50 == 0) { logger.info("Download " + index + " of " + downloads); } try { s3DirectoriesActions.getObjectAsync(mS3RegularClient, mregularBucketName, bucketObject).join(); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof NoSuchKeyException) { logger.error("S3Exception occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } return; } } long normalTimeDifference = System.nanoTime() - normalTimeStart; logger.info("The directory bucket took " + directoryTimeDifference + " nanoseconds, while the regular bucket took " + normalTimeDifference + " nanoseconds."); long difference = normalTimeDifference - directoryTimeDifference; logger.info("That's a difference of " + difference + " nanoseconds, or"); logger.info(difference / 1_000_000_000.0 + " seconds."); if (difference < 0) { logger.info("The directory buckets were slower. This can happen if you are not running on the cloud within a VPC."); } waitForInputToContinue(scanner); } private static String createSessionAddObjects() { logger.info(DASHES); logger.info(""" 5. Create an object and copy it. We'll create an object consisting of some text and upload it to the regular bucket. """); waitForInputToContinue(scanner); String bucketObject = "basic-text-object.txt"; try { s3DirectoriesActions.putObjectAsync(mS3RegularClient, mregularBucketName, bucketObject, "Look Ma, I'm a bucket!").join(); s3DirectoriesActions.createSessionAsync(mS3ExpressClient, mdirectoryBucketName).join(); // Copy the object to the destination S3 bucket. s3DirectoriesActions.copyObjectAsync(mS3ExpressClient, mregularBucketName, bucketObject, mdirectoryBucketName, bucketObject).join(); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof S3Exception) { logger.error("S3Exception occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } } logger.info(""" It worked! This is because the S3Client that performed the copy operation is the expressClient using the credentials for the user with permission to work with directory buckets. It's important to remember the user permissions when interacting with directory buckets. Instead of validating permissions on every call as regular buckets do, directory buckets utilize the user credentials and session token to validate. This allows for much faster connection speeds on every call. For single calls, this is low, but for many concurrent calls this adds up to a lot of time saved. """); waitForInputToContinue(scanner); return bucketObject; } /** * Creates VPC users for the S3 Express One Zone scenario. 
* <p> * This method performs the following steps: * <ol> * <li>Optionally creates a new VPC and VPC Endpoint if the application is running in an EC2 instance in the same Availability Zone as the directory buckets.</li> * <li>Creates two IAM users: one with S3 Express One Zone permissions and one without.</li> * </ol> * * @return a {@link UserNames} object containing the names of the created IAM users */ public static UserNames createVpcUsers() { /* Optionally create a VPC. Create two IAM users, one with S3 Express One Zone permissions and one without. */ logger.info(DASHES); logger.info(""" 1. First, we'll set up a new VPC and VPC Endpoint if this program is running in an EC2 instance in the same AZ as your\s directory buckets will be. Are you running this in an EC2 instance located in the same AZ as your intended directory buckets? """); logger.info("Do you want to setup a VPC Endpoint? (y/n)"); String endpointAns = scanner.nextLine().trim(); if (endpointAns.equalsIgnoreCase("y")) { logger.info(""" Great! Let's set up a VPC, retrieve the Route Table from it, and create a VPC Endpoint to connect the S3 Client to. """); try { s3DirectoriesActions.setupVPCAsync().join(); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof Ec2Exception) { logger.error("IamException occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } } waitForInputToContinue(scanner); } else { logger.info("Skipping the VPC setup. Don't forget to use this in production!"); } logger.info(DASHES); logger.info(""" 2. Create a RegularUser and ExpressUser by using the AWS CDK. One IAM User, named RegularUser, will have permissions to work only with regular buckets and one IAM user, named ExpressUser, will have permissions to work only with directory buckets. """); waitForInputToContinue(scanner); // Create two users required for this scenario. Map<String, String> stackOutputs = createUsersUsingCDK(stackName); regularUser = stackOutputs.get("RegularUser"); expressUser = stackOutputs.get("ExpressUser"); UserNames names = new UserNames(); names.setRegularUserName(regularUser); names.setExpressUserName(expressUser); return names; } /** * Creates users using AWS CloudFormation. * * @return a {@link Map} of String keys and String values representing the stack outputs, * which may include user-related information such as user names and IDs. */ public static Map<String, String> createUsersUsingCDK(String stackName) { logger.info("We'll use an AWS CloudFormation template to create the IAM users and policies."); CloudFormationHelper.deployCloudFormationStack(stackName); return CloudFormationHelper.getStackOutputsAsync(stackName).join(); } /** * Sets up the necessary clients and buckets for the S3 Express service. 
* * @param expressUserName the username for the user with S3 Express permissions * @param regularUserName the username for the user with regular S3 permissions */ public static void setupClientsAndBuckets(String expressUserName, String regularUserName) { Scanner locscanner = new Scanner(System.in); String accessKeyIdforRegUser; String secretAccessforRegUser; try { CreateAccessKeyResponse keyResponse = s3DirectoriesActions.createAccessKeyAsync(regularUserName).join(); accessKeyIdforRegUser = keyResponse.accessKey().accessKeyId(); secretAccessforRegUser = keyResponse.accessKey().secretAccessKey(); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof IamException) { logger.error("IamException occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } return; } String accessKeyIdforExpressUser; String secretAccessforExpressUser; try { CreateAccessKeyResponse keyResponseExpress = s3DirectoriesActions.createAccessKeyAsync(expressUserName).join(); accessKeyIdforExpressUser = keyResponseExpress.accessKey().accessKeyId(); secretAccessforExpressUser = keyResponseExpress.accessKey().secretAccessKey(); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof IamException) { logger.error("IamException occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } return; } logger.info(DASHES); logger.info(""" 3. Create two S3Clients; one uses the ExpressUser's credentials and one uses the RegularUser's credentials. The 2 S3Clients will use different credentials. """); waitForInputToContinue(locscanner); try { mS3RegularClient = createS3ClientWithAccessKeyAsync(accessKeyIdforRegUser, secretAccessforRegUser).join(); mS3ExpressClient = createS3ClientWithAccessKeyAsync(accessKeyIdforExpressUser, secretAccessforExpressUser).join(); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof IllegalArgumentException) { logger.error("An invalid argument exception occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } return; } logger.info(""" We can now use the ExpressUser client to make calls to S3 Express operations. """); waitForInputToContinue(locscanner); logger.info(DASHES); logger.info(""" 4. Create two buckets. Now we will create a directory bucket which is the linchpin of the S3 Express One Zone service. Directory buckets behave differently from regular S3 buckets which we will explore here. We'll also create a regular bucket, put an object into the regular bucket, and copy it to the directory bucket. """); logger.info(""" Now, let's choose an availability zone (AZ) for the directory bucket. We'll choose one that is supported. """); String zoneId; String regularBucketName; try { zoneId = s3DirectoriesActions.selectAvailabilityZoneIdAsync().join(); regularBucketName = "reg-bucket-" + System.currentTimeMillis(); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof Ec2Exception) { logger.error("EC2Exception occurred: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); } return; } logger.info(""" Now, let's create the actual directory bucket, as well as a regular bucket." 
"""); String directoryBucketName = "test-bucket-" + System.currentTimeMillis() + "--" + zoneId + "--x-s3"; try { s3DirectoriesActions.createDirectoryBucketAsync(mS3ExpressClient, directoryBucketName, zoneId).join(); logger.info("Created directory bucket {}", directoryBucketName); } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof BucketAlreadyExistsException) { logger.error("The bucket already exists. Moving on: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); return; } } // Assign to the data member. mdirectoryBucketName = directoryBucketName; try { s3DirectoriesActions.createBucketAsync(mS3RegularClient, regularBucketName).join(); logger.info("Created regular bucket {} ", regularBucketName); mregularBucketName = regularBucketName; } catch (CompletionException ce) { Throwable cause = ce.getCause(); if (cause instanceof BucketAlreadyExistsException) { logger.error("The bucket already exists. Moving on: {}", cause.getMessage(), ce); } else { logger.error("An unexpected error occurred: {}", cause.getMessage(), ce); return; } } logger.info("Great! Both buckets were created."); waitForInputToContinue(locscanner); } /** * Creates an asynchronous S3 client with the specified access key and secret access key. * * @param accessKeyId the AWS access key ID * @param secretAccessKey the AWS secret access key * @return a {@link CompletableFuture} that asynchronously creates the S3 client * @throws IllegalArgumentException if the access key ID or secret access key is null */ public static CompletableFuture<S3AsyncClient> createS3ClientWithAccessKeyAsync(String accessKeyId, String secretAccessKey) { return CompletableFuture.supplyAsync(() -> { // Validate input parameters if (accessKeyId == null || accessKeyId.isBlank() || secretAccessKey == null || secretAccessKey.isBlank()) { throw new IllegalArgumentException("Access Key ID and Secret Access Key must not be null or empty"); } AwsBasicCredentials awsCredentials = AwsBasicCredentials.create(accessKeyId, secretAccessKey); return S3AsyncClient.builder() .credentialsProvider(StaticCredentialsProvider.create(awsCredentials)) .region(Region.US_WEST_2) .build(); }); } private static void waitForInputToContinue(Scanner scanner) { while (true) { logger.info(""); logger.info("Enter 'c' followed by <ENTER> to continue:"); String input = scanner.nextLine(); if (input.trim().equalsIgnoreCase("c")) { logger.info("Continuing with the program..."); logger.info(""); break; } else { logger.info("Invalid input. Please try again."); } } } }

A wrapper class for Amazon S3 SDK methods.

public class S3DirectoriesActions { private static IamAsyncClient iamAsyncClient; private static Ec2AsyncClient ec2AsyncClient; private static final Logger logger = LoggerFactory.getLogger(S3DirectoriesActions.class); private static IamAsyncClient getIAMAsyncClient() { if (iamAsyncClient == null) { SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder() .maxConcurrency(100) .connectionTimeout(Duration.ofSeconds(60)) .readTimeout(Duration.ofSeconds(60)) .writeTimeout(Duration.ofSeconds(60)) .build(); ClientOverrideConfiguration overrideConfig = ClientOverrideConfiguration.builder() .apiCallTimeout(Duration.ofMinutes(2)) .apiCallAttemptTimeout(Duration.ofSeconds(90)) .retryStrategy(RetryMode.STANDARD) .build(); iamAsyncClient = IamAsyncClient.builder() .httpClient(httpClient) .overrideConfiguration(overrideConfig) .build(); } return iamAsyncClient; } private static Ec2AsyncClient getEc2AsyncClient() { if (ec2AsyncClient == null) { SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder() .maxConcurrency(100) .connectionTimeout(Duration.ofSeconds(60)) .readTimeout(Duration.ofSeconds(60)) .writeTimeout(Duration.ofSeconds(60)) .build(); ClientOverrideConfiguration overrideConfig = ClientOverrideConfiguration.builder() .apiCallTimeout(Duration.ofMinutes(2)) .apiCallAttemptTimeout(Duration.ofSeconds(90)) .retryStrategy(RetryMode.STANDARD) .build(); ec2AsyncClient = Ec2AsyncClient.builder() .httpClient(httpClient) .region(Region.US_WEST_2) .overrideConfiguration(overrideConfig) .build(); } return ec2AsyncClient; } /** * Deletes the specified S3 bucket and all the objects within it asynchronously. * * @param s3AsyncClient the S3 asynchronous client to use for the operations * @param bucketName the name of the S3 bucket to be deleted * @return a {@link CompletableFuture} that completes with a {@link WaiterResponse} containing the * {@link HeadBucketResponse} when the bucket has been successfully deleted * @throws CompletionException if there was an error deleting the bucket or its objects */ public CompletableFuture<WaiterResponse<HeadBucketResponse>> deleteBucketAndObjectsAsync(S3AsyncClient s3AsyncClient, String bucketName) { ListObjectsV2Request listRequest = ListObjectsV2Request.builder() .bucket(bucketName) .build(); return s3AsyncClient.listObjectsV2(listRequest) .thenCompose(listResponse -> { if (!listResponse.contents().isEmpty()) { List<ObjectIdentifier> objectIdentifiers = listResponse.contents().stream() .map(s3Object -> ObjectIdentifier.builder().key(s3Object.key()).build()) .collect(Collectors.toList()); DeleteObjectsRequest deleteRequest = DeleteObjectsRequest.builder() .bucket(bucketName) .delete(Delete.builder().objects(objectIdentifiers).build()) .build(); return s3AsyncClient.deleteObjects(deleteRequest) .thenAccept(deleteResponse -> { if (!deleteResponse.errors().isEmpty()) { deleteResponse.errors().forEach(error -> logger.error("Couldn't delete object " + error.key() + ". 
Reason: " + error.message())); } }); } return CompletableFuture.completedFuture(null); }) .thenCompose(ignored -> { DeleteBucketRequest deleteBucketRequest = DeleteBucketRequest.builder() .bucket(bucketName) .build(); return s3AsyncClient.deleteBucket(deleteBucketRequest); }) .thenCompose(ignored -> { S3AsyncWaiter waiter = s3AsyncClient.waiter(); HeadBucketRequest headBucketRequest = HeadBucketRequest.builder().bucket(bucketName).build(); return waiter.waitUntilBucketNotExists(headBucketRequest); }) .whenComplete((ignored, exception) -> { if (exception != null) { Throwable cause = exception.getCause(); if (cause instanceof S3Exception) { throw new CompletionException("Error deleting bucket: " + bucketName, cause); } throw new CompletionException("Failed to delete bucket and objects: " + bucketName, exception); } logger.info("Bucket deleted successfully: " + bucketName); }); } /** * Lists the objects in an S3 bucket asynchronously. * * @param s3Client the S3 async client to use for the operation * @param bucketName the name of the S3 bucket containing the objects to list * @return a {@link CompletableFuture} that contains the list of object keys in the specified bucket */ public CompletableFuture<List<String>> listObjectsAsync(S3AsyncClient s3Client, String bucketName) { ListObjectsV2Request request = ListObjectsV2Request.builder() .bucket(bucketName) .build(); return s3Client.listObjectsV2(request) .thenApply(response -> response.contents().stream() .map(S3Object::key) .toList()) .whenComplete((result, exception) -> { if (exception != null) { throw new CompletionException("Couldn't list objects in bucket: " + bucketName, exception); } }); } /** * Retrieves an object from an Amazon S3 bucket asynchronously. * * @param s3Client the S3 async client to use for the operation * @param bucketName the name of the S3 bucket containing the object * @param keyName the unique identifier (key) of the object to retrieve * @return a {@link CompletableFuture} that, when completed, contains the object's content as a {@link ResponseBytes} of {@link GetObjectResponse} */ public CompletableFuture<ResponseBytes<GetObjectResponse>> getObjectAsync(S3AsyncClient s3Client, String bucketName, String keyName) { GetObjectRequest objectRequest = GetObjectRequest.builder() .key(keyName) .bucket(bucketName) .build(); // Get the object asynchronously and transform it into a byte array return s3Client.getObject(objectRequest, AsyncResponseTransformer.toBytes()) .exceptionally(exception -> { Throwable cause = exception.getCause(); if (cause instanceof NoSuchKeyException) { throw new CompletionException("Failed to get the object. Reason: " + ((S3Exception) cause).awsErrorDetails().errorMessage(), cause); } throw new CompletionException("Failed to get the object", exception); }); } /** * Asynchronously copies an object from one S3 bucket to another. 
* * @param s3Client the S3 async client to use for the copy operation * @param sourceBucket the name of the source bucket * @param sourceKey the key of the object to be copied in the source bucket * @param destinationBucket the name of the destination bucket * @param destinationKey the key of the copied object in the destination bucket * @return a {@link CompletableFuture} that completes when the copy operation is finished */ public CompletableFuture<Void> copyObjectAsync(S3AsyncClient s3Client, String sourceBucket, String sourceKey, String destinationBucket, String destinationKey) { CopyObjectRequest copyRequest = CopyObjectRequest.builder() .sourceBucket(sourceBucket) .sourceKey(sourceKey) .destinationBucket(destinationBucket) .destinationKey(destinationKey) .build(); return s3Client.copyObject(copyRequest) .thenRun(() -> logger.info("Copied object '" + sourceKey + "' from bucket '" + sourceBucket + "' to bucket '" + destinationBucket + "'")) .whenComplete((ignored, exception) -> { if (exception != null) { Throwable cause = exception.getCause(); if (cause instanceof S3Exception) { throw new CompletionException("Couldn't copy object '" + sourceKey + "' from bucket '" + sourceBucket + "' to bucket '" + destinationBucket + "'. Reason: " + ((S3Exception) cause).awsErrorDetails().errorMessage(), cause); } throw new CompletionException("Failed to copy object", exception); } }); } /** * Asynchronously creates a session for the specified S3 bucket. * * @param s3Client the S3 asynchronous client to use for creating the session * @param bucketName the name of the S3 bucket for which to create the session * @return a {@link CompletableFuture} that completes when the session is created, or throws a {@link CompletionException} if an error occurs */ public CompletableFuture<CreateSessionResponse> createSessionAsync(S3AsyncClient s3Client, String bucketName) { CreateSessionRequest request = CreateSessionRequest.builder() .bucket(bucketName) .build(); return s3Client.createSession(request) .whenComplete((response, exception) -> { if (exception != null) { Throwable cause = exception.getCause(); if (cause instanceof S3Exception) { throw new CompletionException("Couldn't create the session. Reason: " + ((S3Exception) cause).awsErrorDetails().errorMessage(), cause); } throw new CompletionException("Unexpected error occurred while creating session", exception); } logger.info("Created session for bucket: " + bucketName); }); } /** * Creates a new S3 directory bucket in a specified Zone (For example, a * specified Availability Zone in this code example). 
* * @param s3Client The asynchronous S3 client used to create the bucket * @param bucketName The name of the bucket to be created * @param zone The Availability Zone where the bucket will be created * @throws CompletionException if there's an error creating the bucket */ public CompletableFuture<CreateBucketResponse> createDirectoryBucketAsync(S3AsyncClient s3Client, String bucketName, String zone) { logger.info("Creating bucket: " + bucketName); CreateBucketConfiguration bucketConfiguration = CreateBucketConfiguration.builder() .location(LocationInfo.builder() .type(LocationType.AVAILABILITY_ZONE) .name(zone) .build()) .bucket(BucketInfo.builder() .type(BucketType.DIRECTORY) .dataRedundancy(DataRedundancy.SINGLE_AVAILABILITY_ZONE) .build()) .build(); CreateBucketRequest bucketRequest = CreateBucketRequest.builder() .bucket(bucketName) .createBucketConfiguration(bucketConfiguration) .build(); return s3Client.createBucket(bucketRequest) .whenComplete((response, exception) -> { if (exception != null) { Throwable cause = exception.getCause(); if (cause instanceof BucketAlreadyExistsException) { throw new CompletionException("The bucket already exists: " + ((S3Exception) cause).awsErrorDetails().errorMessage(), cause); } throw new CompletionException("Unexpected error occurred while creating bucket", exception); } logger.info("Bucket created successfully with location: " + response.location()); }); } /** * Creates an S3 bucket asynchronously. * * @param s3Client the S3 async client to use for the bucket creation * @param bucketName the name of the S3 bucket to create * @return a {@link CompletableFuture} that completes with the {@link WaiterResponse} containing the {@link HeadBucketResponse} * when the bucket is successfully created * @throws CompletionException if there's an error creating the bucket */ public CompletableFuture<WaiterResponse<HeadBucketResponse>> createBucketAsync(S3AsyncClient s3Client, String bucketName) { CreateBucketRequest bucketRequest = CreateBucketRequest.builder() .bucket(bucketName) .build(); return s3Client.createBucket(bucketRequest) .thenCompose(response -> { S3AsyncWaiter s3Waiter = s3Client.waiter(); HeadBucketRequest bucketRequestWait = HeadBucketRequest.builder() .bucket(bucketName) .build(); return s3Waiter.waitUntilBucketExists(bucketRequestWait); }) .whenComplete((response, exception) -> { if (exception != null) { Throwable cause = exception.getCause(); if (cause instanceof BucketAlreadyExistsException) { throw new CompletionException("The S3 bucket exists: " + cause.getMessage(), cause); } else { throw new CompletionException("Failed to create access key: " + exception.getMessage(), exception); } } logger.info(bucketName + " is ready"); }); } /** * Uploads an object to an Amazon S3 bucket asynchronously. 
    /**
     * Uploads an object to an Amazon S3 bucket asynchronously.
     *
     * @param s3Client     the S3 async client to use for the upload
     * @param bucketName   the destination S3 bucket name
     * @param bucketObject the name of the object to be uploaded
     * @param text         the content to be uploaded as the object
     * @return a {@link CompletableFuture} that completes with the {@link PutObjectResponse} when the upload finishes
     */
    public CompletableFuture<PutObjectResponse> putObjectAsync(S3AsyncClient s3Client,
                                                               String bucketName, String bucketObject, String text) {
        PutObjectRequest objectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(bucketObject)
                .build();

        return s3Client.putObject(objectRequest, AsyncRequestBody.fromString(text))
                .whenComplete((response, exception) -> {
                    if (exception != null) {
                        Throwable cause = exception.getCause();
                        if (cause instanceof NoSuchBucketException) {
                            throw new CompletionException("The S3 bucket does not exist: " + cause.getMessage(), cause);
                        } else {
                            throw new CompletionException("Failed to upload the object: " + exception.getMessage(), exception);
                        }
                    }
                });
    }

    /**
     * Creates an AWS IAM access key asynchronously for the specified user name.
     *
     * @param userName the name of the IAM user for whom to create the access key
     * @return a {@link CompletableFuture} that completes with the {@link CreateAccessKeyResponse} containing the
     *         created access key
     */
    public CompletableFuture<CreateAccessKeyResponse> createAccessKeyAsync(String userName) {
        CreateAccessKeyRequest request = CreateAccessKeyRequest.builder()
                .userName(userName)
                .build();

        return getIAMAsyncClient().createAccessKey(request)
                .whenComplete((response, exception) -> {
                    if (exception != null) {
                        Throwable cause = exception.getCause();
                        if (cause instanceof IamException) {
                            throw new CompletionException("IAM error while creating access key: " + cause.getMessage(), cause);
                        }
                        throw new CompletionException("Failed to create access key: " + exception.getMessage(), exception);
                    }
                    logger.info("Access Key Created.");
                });
    }

    /**
     * Asynchronously selects an Availability Zone ID from the available EC2 zones.
     *
     * @return a {@link CompletableFuture} that resolves to the selected Availability Zone ID
     * @throws CompletionException if an error occurs during the request or processing
     */
    public CompletableFuture<String> selectAvailabilityZoneIdAsync() {
        DescribeAvailabilityZonesRequest zonesRequest = DescribeAvailabilityZonesRequest.builder()
                .build();

        return getEc2AsyncClient().describeAvailabilityZones(zonesRequest)
                .thenCompose(response -> {
                    List<AvailabilityZone> zonesList = response.availabilityZones();
                    if (zonesList.isEmpty()) {
                        logger.info("No availability zones found.");
                        return CompletableFuture.completedFuture(null); // Return null if no zones are found.
                    }

                    List<String> zoneIds = zonesList.stream()
                            .map(AvailabilityZone::zoneId) // Get the zone ID (for example, "usw2-az1").
                            .toList();

                    // Prompt on a separate thread so the async pipeline isn't blocked on console input.
                    return CompletableFuture.supplyAsync(() -> promptUserForZoneSelection(zonesList, zoneIds))
                            .thenApply(AvailabilityZone::zoneId); // Return only the selected zone ID.
                })
                .whenComplete((result, exception) -> {
                    if (exception == null) {
                        if (result != null) {
                            logger.info("Selected Availability Zone ID: " + result);
                        } else {
                            logger.info("No availability zone selected.");
                        }
                    } else {
                        Throwable cause = exception.getCause();
                        if (cause instanceof Ec2Exception) {
                            throw new CompletionException("EC2 error while selecting availability zone: "
                                    + cause.getMessage(), cause);
                        }
                        throw new CompletionException("Failed to select availability zone: "
                                + exception.getMessage(), exception);
                    }
                });
    }
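    // Illustrative sketch (not part of the original example): the access key
    // returned by createAccessKeyAsync can seed a StaticCredentialsProvider for
    // a user-scoped client, which is one way the scenario's "regular" and
    // "express" clients could be built. Requires the
    // software.amazon.awssdk.auth.credentials imports; the user name is a
    // placeholder supplied by the caller.
    public S3AsyncClient demoClientForUser(String userName) {
        CreateAccessKeyResponse keyResponse = createAccessKeyAsync(userName).join();
        return S3AsyncClient.builder()
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(
                                keyResponse.accessKey().accessKeyId(),
                                keyResponse.accessKey().secretAccessKey())))
                .build();
    }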
    /**
     * Prompts the user to select an Availability Zone from the given list.
     *
     * @param zonesList the list of Availability Zones
     * @param zoneIds   the list of zone IDs
     * @return the selected Availability Zone
     */
    private static AvailabilityZone promptUserForZoneSelection(List<AvailabilityZone> zonesList, List<String> zoneIds) {
        Scanner scanner = new Scanner(System.in);
        int index = -1;

        while (index < 0 || index >= zoneIds.size()) {
            logger.info("Select an availability zone:");
            IntStream.range(0, zoneIds.size()).forEach(i ->
                    logger.info(i + ": " + zoneIds.get(i))
            );

            logger.info("Enter the number corresponding to your choice: ");
            if (scanner.hasNextInt()) {
                index = scanner.nextInt();
            } else {
                scanner.next(); // Discard invalid (non-integer) input.
            }
        }

        AvailabilityZone selectedZone = zonesList.get(index);
        logger.info("You selected: " + selectedZone.zoneId());
        return selectedZone;
    }

    /**
     * Asynchronously sets up a new VPC, including creating the VPC, finding the associated route table, and
     * creating a VPC endpoint for the S3 service.
     *
     * @return a {@link CompletableFuture} that, when completed, contains an {@link AbstractMap.SimpleEntry} with the
     *         VPC ID and VPC endpoint ID
     */
    public CompletableFuture<AbstractMap.SimpleEntry<String, String>> setupVPCAsync() {
        String cidr = "10.0.0.0/16";
        CreateVpcRequest vpcRequest = CreateVpcRequest.builder()
                .cidrBlock(cidr)
                .build();

        return getEc2AsyncClient().createVpc(vpcRequest)
                .thenCompose(vpcResponse -> {
                    String vpcId = vpcResponse.vpc().vpcId();
                    logger.info("VPC Created: {}", vpcId);
                    Ec2AsyncWaiter waiter = getEc2AsyncClient().waiter();
                    DescribeVpcsRequest request = DescribeVpcsRequest.builder()
                            .vpcIds(vpcId)
                            .build();

                    return waiter.waitUntilVpcAvailable(request)
                            .thenApply(waiterResponse -> vpcId);
                })
                .thenCompose(vpcId -> {
                    Filter filter = Filter.builder()
                            .name("vpc-id")
                            .values(vpcId)
                            .build();

                    DescribeRouteTablesRequest describeRouteTablesRequest = DescribeRouteTablesRequest.builder()
                            .filters(filter)
                            .build();

                    return getEc2AsyncClient().describeRouteTables(describeRouteTablesRequest)
                            .thenApply(routeTablesResponse -> {
                                if (routeTablesResponse.routeTables().isEmpty()) {
                                    throw new CompletionException("No route tables found for VPC: " + vpcId, null);
                                }
                                String routeTableId = routeTablesResponse.routeTables().get(0).routeTableId();
                                logger.info("Route table found: {}", routeTableId);
                                return new AbstractMap.SimpleEntry<>(vpcId, routeTableId);
                            });
                })
                .thenCompose(vpcAndRouteTable -> {
                    String vpcId = vpcAndRouteTable.getKey();
                    String routeTableId = vpcAndRouteTable.getValue();
                    Region region = getEc2AsyncClient().serviceClientConfiguration().region();
                    String serviceName = String.format("com.amazonaws.%s.s3express", region.id());
                    CreateVpcEndpointRequest endpointRequest = CreateVpcEndpointRequest.builder()
                            .vpcId(vpcId)
                            .routeTableIds(routeTableId)
                            .serviceName(serviceName)
                            .build();

                    return getEc2AsyncClient().createVpcEndpoint(endpointRequest)
                            .thenApply(vpcEndpointResponse -> {
                                String vpcEndpointId = vpcEndpointResponse.vpcEndpoint().vpcEndpointId();
                                logger.info("VPC Endpoint created: {}", vpcEndpointId);
                                return new AbstractMap.SimpleEntry<>(vpcId, vpcEndpointId);
                            });
                })
                .exceptionally(exception -> {
                    Throwable cause = exception.getCause() != null ? exception.getCause() : exception;
                    if (cause instanceof Ec2Exception) {
                        logger.error("EC2 error during VPC setup: {}", cause.getMessage(), cause);
                        throw new CompletionException("EC2 error during VPC setup: " + cause.getMessage(), cause);
                    }
                    logger.error("VPC setup failed: {}", cause.getMessage(), cause);
                    throw new CompletionException("VPC setup failed: " + cause.getMessage(), cause);
                });
    }
}
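Because each helper returns a CompletableFuture, callers decide where to compose and where to block. The following minimal sketch (not part of the original example; it assumes default credentials and region, and uses placeholder bucket and object names) chains zone selection, directory bucket creation, and an upload:

S3DirectoriesActions actions = new S3DirectoriesActions();
S3AsyncClient s3ExpressClient = S3AsyncClient.create();

String zoneId = actions.selectAvailabilityZoneIdAsync().join();
String bucketName = "my-demo-bucket--" + zoneId + "--x-s3";

actions.createDirectoryBucketAsync(s3ExpressClient, bucketName, zoneId)
        .thenCompose(r -> actions.putObjectAsync(s3ExpressClient, bucketName,
                "basic-text-object", "Hello, S3 Express One Zone!"))
        .join();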

For a complete list of AWS SDK developer guides and code examples, see Developing with Amazon S3 using the AWS SDKs. This topic also includes information about getting started and details about previous SDK versions.
