
Amazon S3 bucket and object basics using an AWS SDK

The following code examples show how to:

  • Create a bucket.

  • Upload a file to the bucket.

  • Download an object from a bucket.

  • Copy an object to a subfolder in a bucket.

  • List the objects in a bucket.

  • Delete the objects in a bucket.

  • Delete a bucket.
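
Each of the language-specific examples that follow implements this full scenario. As a quick orientation, here is a condensed sketch of the same steps end to end using the AWS SDK for Go V2. It is only an outline under assumptions, not one of the documented examples: the bucket name, file names, and keys are placeholders, default credentials and Region are assumed, and error handling is abbreviated.

// Condensed sketch of the scenario using the AWS SDK for Go V2.
// Assumptions: default credentials/Region are configured, the names below are
// placeholders, and most errors are ignored for brevity.
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    ctx := context.TODO()
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        panic(err)
    }
    client := s3.NewFromConfig(cfg)
    bucket := "doc-example-bucket-replace-me" // Bucket names must be globally unique.

    // 1. Create a bucket. Outside us-east-1, also set CreateBucketConfiguration
    // with a LocationConstraint for your Region.
    if _, err := client.CreateBucket(ctx, &s3.CreateBucketInput{Bucket: aws.String(bucket)}); err != nil {
        panic(err)
    }

    // 2. Upload a file to the bucket.
    file, err := os.Open("testfile.txt")
    if err != nil {
        panic(err)
    }
    defer file.Close()
    _, err = client.PutObject(ctx, &s3.PutObjectInput{
        Bucket: aws.String(bucket), Key: aws.String("testfile.txt"), Body: file,
    })

    // 3. Download the object from the bucket.
    obj, err := client.GetObject(ctx, &s3.GetObjectInput{
        Bucket: aws.String(bucket), Key: aws.String("testfile.txt"),
    })
    if err == nil {
        obj.Body.Close() // Read from obj.Body here if you need the contents.
    }

    // 4. Copy the object to a subfolder in the bucket.
    _, err = client.CopyObject(ctx, &s3.CopyObjectInput{
        Bucket:     aws.String(bucket),
        CopySource: aws.String(bucket + "/testfile.txt"),
        Key:        aws.String("copy-folder/testfile.txt"),
    })

    // 5. List the objects in the bucket, and 6. delete each one.
    list, err := client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{Bucket: aws.String(bucket)})
    if err == nil {
        for _, o := range list.Contents {
            fmt.Println(*o.Key)
            _, _ = client.DeleteObject(ctx, &s3.DeleteObjectInput{
                Bucket: aws.String(bucket), Key: o.Key,
            })
        }
    }

    // 7. Delete the now-empty bucket.
    _, _ = client.DeleteBucket(ctx, &s3.DeleteBucketInput{Bucket: aws.String(bucket)})
}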

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.S3;

public class S3_Basics
{
    public static async Task Main()
    {
        // Create an Amazon S3 client object. The constructor uses the
        // default user installed on the system. To work with Amazon S3
        // features in a different AWS Region, pass the AWS Region as a
        // parameter to the client constructor.
        IAmazonS3 client = new AmazonS3Client();
        string bucketName = string.Empty;
        string filePath = string.Empty;
        string keyName = string.Empty;

        Console.WriteLine("Amazon Simple Storage Service (Amazon S3) basic");
        Console.WriteLine("procedures. This application will:");
        Console.WriteLine("\n\t1. Create a bucket");
        Console.WriteLine("\n\t2. Upload an object to the new bucket");
        Console.WriteLine("\n\t3. Copy the uploaded object to a folder in the bucket");
        Console.WriteLine("\n\t4. List the items in the new bucket");
        Console.WriteLine("\n\t5. Delete all the items in the bucket");
        Console.WriteLine("\n\t6. Delete the bucket");
        Console.WriteLine("----------------------------------------------------------");

        // Create a bucket.
        Console.WriteLine("\nCreate a new Amazon S3 bucket.\n");
        Console.Write("Please enter a name for the new bucket: ");
        bucketName = Console.ReadLine();

        var success = await S3Bucket.CreateBucketAsync(client, bucketName);
        if (success)
        {
            Console.WriteLine($"Successfully created bucket: {bucketName}.\n");
        }
        else
        {
            Console.WriteLine($"Could not create bucket: {bucketName}.\n");
        }

        Console.WriteLine("Upload a file to the new bucket.");

        // Get the local path and filename for the file to upload.
        while (string.IsNullOrEmpty(filePath))
        {
            Console.Write("Please enter the path and filename of the file to upload: ");
            filePath = Console.ReadLine();

            // Confirm that the file exists on the local computer.
            if (!File.Exists(filePath))
            {
                Console.WriteLine($"Couldn't find {filePath}. Try again.\n");
                filePath = string.Empty;
            }
        }

        // Get the file name from the full path.
        keyName = Path.GetFileName(filePath);

        success = await S3Bucket.UploadFileAsync(client, bucketName, keyName, filePath);

        if (success)
        {
            Console.WriteLine($"Successfully uploaded {keyName} from {filePath} to {bucketName}.\n");
        }
        else
        {
            Console.WriteLine($"Could not upload {keyName}.\n");
        }

        // Set the file path to an empty string to avoid overwriting the
        // file we just uploaded to the bucket.
        filePath = string.Empty;

        // Now get a new location where we can save the file.
        while (string.IsNullOrEmpty(filePath))
        {
            // First get the path to which the file will be downloaded.
            Console.Write("Please enter the path where the file will be downloaded: ");
            filePath = Console.ReadLine();

            // Confirm that the file doesn't already exist on the local computer.
            if (File.Exists($"{filePath}\\{keyName}"))
            {
                Console.WriteLine($"Sorry, the file already exists in that location.\n");
                filePath = string.Empty;
            }
        }

        // Download an object from a bucket.
        success = await S3Bucket.DownloadObjectFromBucketAsync(client, bucketName, keyName, filePath);

        if (success)
        {
            Console.WriteLine($"Successfully downloaded {keyName}.\n");
        }
        else
        {
            Console.WriteLine($"Sorry, could not download {keyName}.\n");
        }

        // Copy the object to a different folder in the bucket.
        string folderName = string.Empty;

        while (string.IsNullOrEmpty(folderName))
        {
            Console.Write("Please enter the name of the folder to copy your object to: ");
            folderName = Console.ReadLine();
        }

        while (string.IsNullOrEmpty(keyName))
        {
            // Get the name to give to the object once uploaded.
            Console.Write("Enter the name of the object to copy: ");
            keyName = Console.ReadLine();
        }

        await S3Bucket.CopyObjectInBucketAsync(client, bucketName, keyName, folderName);

        // List the objects in the bucket.
        await S3Bucket.ListBucketContentsAsync(client, bucketName);

        // Delete the contents of the bucket.
        await S3Bucket.DeleteBucketContentsAsync(client, bucketName);

        // Deleting the bucket too quickly after deleting its contents will
        // cause an error that the bucket isn't empty. So...
        Console.WriteLine("Press <Enter> when you are ready to delete the bucket.");
        _ = Console.ReadLine();

        // Delete the bucket.
        await S3Bucket.DeleteBucketAsync(client, bucketName);
    }
}
C++
SDK for C++
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

#include <iostream>
#include <aws/core/Aws.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/CopyObjectRequest.h>
#include <aws/s3/model/CreateBucketRequest.h>
#include <aws/s3/model/DeleteBucketRequest.h>
#include <aws/s3/model/DeleteObjectRequest.h>
#include <aws/s3/model/GetObjectRequest.h>
#include <aws/s3/model/ListObjectsRequest.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <aws/s3/model/BucketLocationConstraint.h>
#include <aws/s3/model/CreateBucketConfiguration.h>
#include <aws/core/utils/UUID.h>
#include <aws/core/utils/StringUtils.h>
#include <aws/core/utils/memory/stl/AWSAllocator.h>
#include <aws/core/utils/memory/stl/AWSStreamFwd.h>
#include <fstream>
#include "awsdoc/s3/s3_examples.h"

namespace AwsDoc {
    namespace S3 {
        //! Delete an S3 bucket.
        /*!
          \sa DeleteBucket()
          \param bucketName The S3 bucket's name.
          \param client An S3 client.
        */
        static bool DeleteBucket(const Aws::String &bucketName,
                                 Aws::S3::S3Client &client);

        //! Delete an object in an S3 bucket.
        /*!
          \sa DeleteObjectFromBucket()
          \param bucketName The S3 bucket's name.
          \param key The key for the object in the S3 bucket.
          \param client An S3 client.
        */
        static bool DeleteObjectFromBucket(const Aws::String &bucketName,
                                           const Aws::String &key,
                                           Aws::S3::S3Client &client);
    }
}

//! Scenario to create, copy, and delete S3 buckets and objects.
/*!
  \sa S3_GettingStartedScenario()
  \param uploadFilePath Path to file to upload to an Amazon S3 bucket.
  \param saveFilePath Path for saving a downloaded S3 object.
  \param clientConfig Aws client configuration.
*/
bool AwsDoc::S3::S3_GettingStartedScenario(const Aws::String &uploadFilePath,
                                           const Aws::String &saveFilePath,
                                           const Aws::Client::ClientConfiguration &clientConfig) {
    Aws::S3::S3Client client(clientConfig);

    // Create a unique bucket name which is only temporary and will be deleted.
    // Format: "doc-example-bucket-" + lowercase UUID.
    Aws::String uuid = Aws::Utils::UUID::RandomUUID();
    Aws::String bucketName = "doc-example-bucket-" +
                             Aws::Utils::StringUtils::ToLower(uuid.c_str());

    // 1. Create a bucket.
    {
        Aws::S3::Model::CreateBucketRequest request;
        request.SetBucket(bucketName);

        if (clientConfig.region != Aws::Region::US_EAST_1) {
            Aws::S3::Model::CreateBucketConfiguration createBucketConfiguration;
            createBucketConfiguration.WithLocationConstraint(
                    Aws::S3::Model::BucketLocationConstraintMapper::GetBucketLocationConstraintForName(
                            clientConfig.region));
            request.WithCreateBucketConfiguration(createBucketConfiguration);
        }

        Aws::S3::Model::CreateBucketOutcome outcome = client.CreateBucket(request);
        if (!outcome.IsSuccess()) {
            const Aws::S3::S3Error &err = outcome.GetError();
            std::cerr << "Error: CreateBucket: " <<
                      err.GetExceptionName() << ": " << err.GetMessage() << std::endl;
            return false;
        }
        else {
            std::cout << "Created the bucket, '" << bucketName <<
                      "', in the region, '" << clientConfig.region << "'." << std::endl;
        }
    }

    // 2. Upload a local file to the bucket.
    Aws::String key = "key-for-test";
    {
        Aws::S3::Model::PutObjectRequest request;
        request.SetBucket(bucketName);
        request.SetKey(key);

        std::shared_ptr<Aws::FStream> input_data =
                Aws::MakeShared<Aws::FStream>("SampleAllocationTag",
                                              uploadFilePath,
                                              std::ios_base::in | std::ios_base::binary);
        if (!input_data->is_open()) {
            std::cerr << "Error: unable to open file, '" << uploadFilePath << "'." << std::endl;
            AwsDoc::S3::DeleteBucket(bucketName, client);
            return false;
        }

        request.SetBody(input_data);

        Aws::S3::Model::PutObjectOutcome outcome = client.PutObject(request);
        if (!outcome.IsSuccess()) {
            std::cerr << "Error: PutObject: " << outcome.GetError().GetMessage() << std::endl;
            AwsDoc::S3::DeleteObjectFromBucket(bucketName, key, client);
            AwsDoc::S3::DeleteBucket(bucketName, client);
            return false;
        }
        else {
            std::cout << "Added the object with the key, '" << key << "', to the bucket, '"
                      << bucketName << "'." << std::endl;
        }
    }

    // 3. Download the object to a local file.
    {
        Aws::S3::Model::GetObjectRequest request;
        request.SetBucket(bucketName);
        request.SetKey(key);

        Aws::S3::Model::GetObjectOutcome outcome = client.GetObject(request);
        if (!outcome.IsSuccess()) {
            const Aws::S3::S3Error &err = outcome.GetError();
            std::cerr << "Error: GetObject: " <<
                      err.GetExceptionName() << ": " << err.GetMessage() << std::endl;
        }
        else {
            std::cout << "Downloaded the object with the key, '" << key << "', in the bucket, '"
                      << bucketName << "'." << std::endl;

            Aws::IOStream &ioStream = outcome.GetResultWithOwnership().GetBody();
            Aws::OFStream outStream(saveFilePath);
            if (!outStream.is_open()) {
                std::cout << "Error: unable to open file, '" << saveFilePath << "'." << std::endl;
            }
            else {
                outStream << ioStream.rdbuf();
                std::cout << "Wrote the downloaded object to the file '"
                          << saveFilePath << "'." << std::endl;
            }
        }
    }

    // 4. Copy the object to a different "folder" in the bucket.
    Aws::String copiedToKey = "test-folder/" + key;
    {
        Aws::S3::Model::CopyObjectRequest request;
        request.WithBucket(bucketName)
                .WithKey(copiedToKey)
                .WithCopySource(bucketName + "/" + key);

        Aws::S3::Model::CopyObjectOutcome outcome = client.CopyObject(request);
        if (!outcome.IsSuccess()) {
            std::cerr << "Error: CopyObject: " << outcome.GetError().GetMessage() << std::endl;
        }
        else {
            std::cout << "Copied the object with the key, '" << key << "', to the key, '"
                      << copiedToKey << "', in the bucket, '" << bucketName << "'." << std::endl;
        }
    }

    // 5. List objects in the bucket.
    {
        Aws::S3::Model::ListObjectsRequest request;
        request.WithBucket(bucketName);

        Aws::S3::Model::ListObjectsOutcome outcome = client.ListObjects(request);
        if (!outcome.IsSuccess()) {
            std::cerr << "Error: ListObjects: " << outcome.GetError().GetMessage() << std::endl;
        }
        else {
            Aws::Vector<Aws::S3::Model::Object> objects =
                    outcome.GetResult().GetContents();

            std::cout << objects.size() << " objects in the bucket, '"
                      << bucketName << "':" << std::endl;

            for (Aws::S3::Model::Object &object: objects) {
                std::cout << "     '" << object.GetKey() << "'" << std::endl;
            }
        }
    }

    // 6. Delete all objects in the bucket.
    // All objects in the bucket must be deleted before deleting the bucket.
    AwsDoc::S3::DeleteObjectFromBucket(bucketName, copiedToKey, client);
    AwsDoc::S3::DeleteObjectFromBucket(bucketName, key, client);

    // 7. Delete the bucket.
    return AwsDoc::S3::DeleteBucket(bucketName, client);
}

bool AwsDoc::S3::DeleteObjectFromBucket(const Aws::String &bucketName,
                                        const Aws::String &key,
                                        Aws::S3::S3Client &client) {
    Aws::S3::Model::DeleteObjectRequest request;
    request.SetBucket(bucketName);
    request.SetKey(key);

    Aws::S3::Model::DeleteObjectOutcome outcome = client.DeleteObject(request);
    if (!outcome.IsSuccess()) {
        std::cerr << "Error: DeleteObject: " << outcome.GetError().GetMessage() << std::endl;
    }
    else {
        std::cout << "Deleted the object with the key, '" << key << "', from the bucket, '"
                  << bucketName << "'." << std::endl;
    }

    return outcome.IsSuccess();
}

bool AwsDoc::S3::DeleteBucket(const Aws::String &bucketName,
                              Aws::S3::S3Client &client) {
    Aws::S3::Model::DeleteBucketRequest request;
    request.SetBucket(bucketName);

    Aws::S3::Model::DeleteBucketOutcome outcome = client.DeleteBucket(request);
    if (!outcome.IsSuccess()) {
        const Aws::S3::S3Error &err = outcome.GetError();
        std::cerr << "Error: DeleteBucket: " <<
                  err.GetExceptionName() << ": " << err.GetMessage() << std::endl;
    }
    else {
        std::cout << "Deleted the bucket, '" << bucketName << "'." << std::endl;
    }

    return outcome.IsSuccess();
}
Go
SDK for Go V2
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

    // This bucket name is 100% unique.
    // Remember that bucket names must be globally unique among all buckets.
    myBucketName := "mybucket-" + (xid.New().String())
    fmt.Printf("Bucket name: %v\n", myBucketName)

    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        panic("Failed to load configuration")
    }

    s3client := s3.NewFromConfig(cfg)

    MakeBucket(*s3client, myBucketName)
    BucketOps(*s3client, myBucketName)
    AccountBucketOps(*s3client, myBucketName)
    BucketDelOps(*s3client, myBucketName)
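
MakeBucket, BucketOps, AccountBucketOps, and BucketDelOps are helper functions defined in the complete example on GitHub; they are not shown in this excerpt. As a rough, hypothetical illustration of the kind of call such a helper wraps (the body below is an assumption for orientation, not the repository's actual implementation), a bucket-creation helper might look like the following.

// Hypothetical sketch of a bucket-creation helper like the MakeBucket call above.
// Assumes imports of "context", "fmt", "github.com/aws/aws-sdk-go-v2/aws", and
// "github.com/aws/aws-sdk-go-v2/service/s3". Not the repository's actual code.
func MakeBucket(client s3.Client, name string) {
    // Outside us-east-1, also set CreateBucketConfiguration with a LocationConstraint.
    _, err := client.CreateBucket(context.TODO(), &s3.CreateBucketInput{
        Bucket: aws.String(name),
    })
    if err != nil {
        panic("Failed to create bucket " + name + ": " + err.Error())
    }
    fmt.Println("Created bucket", name)
}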
Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

/**
 * Before running this Java V2 code example, set up your development environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class S3Scenario {

    public static void main(String[] args) throws IOException {
        final String usage = "\n" +
            "Usage:\n" +
            "    <bucketName> <key> <objectPath> <savePath> <toBucket>\n\n" +
            "Where:\n" +
            "    bucketName - The Amazon S3 bucket to create.\n\n" +
            "    key - The key to use.\n\n" +
            "    objectPath - The path where the file is located (for example, C:/AWS/book2.pdf).\n\n" +
            "    savePath - The path where the file is saved after it's downloaded (for example, C:/AWS/book2.pdf).\n\n" +
            "    toBucket - An Amazon S3 bucket that the object is copied to.\n";

        if (args.length != 5) {
            System.out.println(usage);
            System.exit(1);
        }

        String bucketName = args[0];
        String key = args[1];
        String objectPath = args[2];
        String savePath = args[3];
        String toBucket = args[4];

        ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
        Region region = Region.US_EAST_1;
        S3Client s3 = S3Client.builder()
            .region(region)
            .credentialsProvider(credentialsProvider)
            .build();

        // Create an Amazon S3 bucket.
        createBucket(s3, bucketName);

        // Upload a local file to the Amazon S3 bucket.
        uploadLocalFile(s3, bucketName, key, objectPath);

        // Download the object to another local file.
        getObjectBytes(s3, bucketName, key, savePath);

        // Perform a multipart upload.
        String multipartKey = "multiPartKey";
        multipartUpload(s3, toBucket, multipartKey);

        // List all objects located in the Amazon S3 bucket.
        // Show two ways of doing this.
        listAllObjects(s3, bucketName);
        anotherListExample(s3, bucketName);

        // Copy the object to another Amazon S3 bucket.
        copyBucketObject(s3, bucketName, key, toBucket);

        // Delete the object from the Amazon S3 bucket.
        deleteObjectFromBucket(s3, bucketName, key);

        // Delete the Amazon S3 bucket.
        deleteBucket(s3, bucketName);
        System.out.println("All Amazon S3 operations were successfully performed");
        s3.close();
    }

    // Create a bucket by using an S3Waiter object.
    public static void createBucket(S3Client s3Client, String bucketName) {
        try {
            S3Waiter s3Waiter = s3Client.waiter();
            CreateBucketRequest bucketRequest = CreateBucketRequest.builder()
                .bucket(bucketName)
                .build();

            s3Client.createBucket(bucketRequest);
            HeadBucketRequest bucketRequestWait = HeadBucketRequest.builder()
                .bucket(bucketName)
                .build();

            // Wait until the bucket is created and print out the response.
            WaiterResponse<HeadBucketResponse> waiterResponse = s3Waiter.waitUntilBucketExists(bucketRequestWait);
            waiterResponse.matched().response().ifPresent(System.out::println);
            System.out.println(bucketName + " is ready");

        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }

    public static void deleteBucket(S3Client client, String bucket) {
        DeleteBucketRequest deleteBucketRequest = DeleteBucketRequest.builder()
            .bucket(bucket)
            .build();

        client.deleteBucket(deleteBucketRequest);
        System.out.println(bucket + " was deleted.");
    }

    /**
     * Upload an object in parts.
     */
    private static void multipartUpload(S3Client s3, String bucketName, String key) {
        int mB = 1024 * 1024;

        // First create a multipart upload and get the upload id.
        CreateMultipartUploadRequest createMultipartUploadRequest = CreateMultipartUploadRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

        CreateMultipartUploadResponse response = s3.createMultipartUpload(createMultipartUploadRequest);
        String uploadId = response.uploadId();
        System.out.println(uploadId);

        // Upload all the different parts of the object.
        UploadPartRequest uploadPartRequest1 = UploadPartRequest.builder()
            .bucket(bucketName)
            .key(key)
            .uploadId(uploadId)
            .partNumber(1).build();

        String etag1 = s3.uploadPart(uploadPartRequest1, RequestBody.fromByteBuffer(getRandomByteBuffer(5 * mB))).eTag();
        CompletedPart part1 = CompletedPart.builder().partNumber(1).eTag(etag1).build();

        UploadPartRequest uploadPartRequest2 = UploadPartRequest.builder().bucket(bucketName).key(key)
            .uploadId(uploadId)
            .partNumber(2).build();

        String etag2 = s3.uploadPart(uploadPartRequest2, RequestBody.fromByteBuffer(getRandomByteBuffer(3 * mB))).eTag();
        CompletedPart part2 = CompletedPart.builder().partNumber(2).eTag(etag2).build();

        // Call completeMultipartUpload operation to tell S3 to merge all uploaded
        // parts and finish the multipart operation.
        CompletedMultipartUpload completedMultipartUpload = CompletedMultipartUpload.builder()
            .parts(part1, part2)
            .build();

        CompleteMultipartUploadRequest completeMultipartUploadRequest = CompleteMultipartUploadRequest.builder()
            .bucket(bucketName)
            .key(key)
            .uploadId(uploadId)
            .multipartUpload(completedMultipartUpload)
            .build();

        s3.completeMultipartUpload(completeMultipartUploadRequest);
    }

    private static ByteBuffer getRandomByteBuffer(int size) {
        byte[] b = new byte[size];
        new Random().nextBytes(b);
        return ByteBuffer.wrap(b);
    }

    // Return a byte array.
    private static byte[] getObjectFile(String filePath) {
        FileInputStream fileInputStream = null;
        byte[] bytesArray = null;

        try {
            File file = new File(filePath);
            bytesArray = new byte[(int) file.length()];
            fileInputStream = new FileInputStream(file);
            fileInputStream.read(bytesArray);

        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (fileInputStream != null) {
                try {
                    fileInputStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

        return bytesArray;
    }

    public static void getObjectBytes(S3Client s3, String bucketName, String keyName, String path) {
        try {
            GetObjectRequest objectRequest = GetObjectRequest
                .builder()
                .key(keyName)
                .bucket(bucketName)
                .build();

            ResponseBytes<GetObjectResponse> objectBytes = s3.getObjectAsBytes(objectRequest);
            byte[] data = objectBytes.asByteArray();

            // Write the data to a local file.
            File myFile = new File(path);
            OutputStream os = new FileOutputStream(myFile);
            os.write(data);
            System.out.println("Successfully obtained bytes from an S3 object");
            os.close();

        } catch (IOException ex) {
            ex.printStackTrace();
        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }

    public static void uploadLocalFile(S3Client s3, String bucketName, String key, String objectPath) {
        PutObjectRequest objectRequest = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

        s3.putObject(objectRequest, RequestBody.fromBytes(getObjectFile(objectPath)));
    }

    public static void listAllObjects(S3Client s3, String bucketName) {
        ListObjectsV2Request listObjectsReqManual = ListObjectsV2Request.builder()
            .bucket(bucketName)
            .maxKeys(1)
            .build();

        boolean done = false;
        while (!done) {
            ListObjectsV2Response listObjResponse = s3.listObjectsV2(listObjectsReqManual);
            for (S3Object content : listObjResponse.contents()) {
                System.out.println(content.key());
            }

            if (listObjResponse.nextContinuationToken() == null) {
                done = true;
            }

            listObjectsReqManual = listObjectsReqManual.toBuilder()
                .continuationToken(listObjResponse.nextContinuationToken())
                .build();
        }
    }

    public static void anotherListExample(S3Client s3, String bucketName) {
        ListObjectsV2Request listReq = ListObjectsV2Request.builder()
            .bucket(bucketName)
            .maxKeys(1)
            .build();

        ListObjectsV2Iterable listRes = s3.listObjectsV2Paginator(listReq);

        // Process response pages.
        listRes.stream()
            .flatMap(r -> r.contents().stream())
            .forEach(content -> System.out.println(" Key: " + content.key() + " size = " + content.size()));

        // Helper method to work with the paginated collection of items directly.
        listRes.contents().stream()
            .forEach(content -> System.out.println(" Key: " + content.key() + " size = " + content.size()));

        for (S3Object content : listRes.contents()) {
            System.out.println(" Key: " + content.key() + " size = " + content.size());
        }
    }

    public static void deleteObjectFromBucket(S3Client s3, String bucketName, String key) {
        DeleteObjectRequest deleteObjectRequest = DeleteObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

        s3.deleteObject(deleteObjectRequest);
        System.out.println(key + " was deleted");
    }

    public static String copyBucketObject(S3Client s3, String fromBucket, String objectKey, String toBucket) {
        String encodedUrl = null;
        try {
            encodedUrl = URLEncoder.encode(fromBucket + "/" + objectKey, StandardCharsets.UTF_8.toString());
        } catch (UnsupportedEncodingException e) {
            System.out.println("URL could not be encoded: " + e.getMessage());
        }

        CopyObjectRequest copyReq = CopyObjectRequest.builder()
            .copySource(encodedUrl)
            .destinationBucket(toBucket)
            .destinationKey(objectKey)
            .build();

        try {
            CopyObjectResponse copyRes = s3.copyObject(copyReq);
            System.out.println("The " + objectKey + " was copied to " + toBucket);
            return copyRes.copyObjectResult().toString();

        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }

        return "";
    }
}
JavaScript
SDK for JavaScript v3
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import {
  CreateBucketCommand,
  PutObjectCommand,
  CopyObjectCommand,
  DeleteObjectCommand,
  DeleteBucketCommand,
  GetObjectCommand
} from "@aws-sdk/client-s3";
import { s3Client } from "../libs/s3Client.js"; // Helper function that creates an Amazon S3 service client module.

if (process.argv.length < 5) {
  console.log(
    "Usage: node s3_basics.js <the bucket name> <object name> <object content>\n" +
      "Example: node s3_basics_full.js test-bucket 'test.txt' 'Test Content'"
  );
}
const bucket_name = process.argv[2];
const object_key = process.argv[3];
const object_content = process.argv[4];

export const run = async (bucket_name, object_key, object_content) => {
  try {
    const create_bucket_params = {
      Bucket: bucket_name,
    };
    console.log("\nCreating the bucket, named " + bucket_name + "...\n");
    console.log("about to create");
    const data = await s3Client.send(
      new CreateBucketCommand(create_bucket_params)
    );
    console.log("Bucket created at ", data.Location);
    try {
      console.log(
        "\nCreated and uploaded an object named " +
          object_key +
          " to first bucket " +
          bucket_name +
          " ...\n"
      );
      // Set the parameters for the object to upload.
      const object_upload_params = {
        Bucket: bucket_name,
        // Specify the name of the new object. For example, 'test.html'.
        // To create a directory for the object, use '/'. For example, 'myApp/package.json'.
        Key: object_key,
        // Content of the new object.
        Body: object_content,
      };
      // Create and upload the object to the first S3 bucket.
      await s3Client.send(new PutObjectCommand(object_upload_params));
      console.log(
        "Successfully uploaded object: " +
          object_upload_params.Bucket +
          "/" +
          object_upload_params.Key
      );
      try {
        const download_bucket_params = {
          Bucket: bucket_name,
          Key: object_key,
        };
        console.log(
          "\nDownloading " + object_key + " from " + bucket_name + " ...\n"
        );
        // Create a helper function to convert a ReadableStream into a string.
        const streamToString = (stream) =>
          new Promise((resolve, reject) => {
            const chunks = [];
            stream.on("data", (chunk) => chunks.push(chunk));
            stream.on("error", reject);
            stream.on("end", () =>
              resolve(Buffer.concat(chunks).toString("utf8"))
            );
          });
        // Get the object from the Amazon S3 bucket. It is returned as a ReadableStream.
        const data = await s3Client.send(
          new GetObjectCommand(download_bucket_params)
        );
        // Convert the ReadableStream to a string.
        const bodyContents = await streamToString(data.Body);
        console.log(bodyContents);
        try {
          // Copy the object from the first bucket to a folder in the same bucket.
          const copy_object_params = {
            Bucket: bucket_name,
            CopySource: "/" + bucket_name + "/" + object_key,
            Key: "copy-destination/" + object_key,
          };
          console.log(
            "\nCopying " +
              object_key +
              " from " +
              bucket_name +
              " to " +
              bucket_name +
              "/" +
              copy_object_params.Key +
              " ...\n"
          );
          await s3Client.send(new CopyObjectCommand(copy_object_params));
          console.log("Success, object copied to folder.");
          try {
            console.log("\nDeleting " + object_key + " from " + bucket_name);
            const delete_object_from_bucket_params = {
              Bucket: bucket_name,
              Key: object_key,
            };
            await s3Client.send(
              new DeleteObjectCommand(delete_object_from_bucket_params)
            );
            console.log("Success. Object deleted from bucket.");
            try {
              console.log(
                "\nDeleting " +
                  object_key +
                  " from " +
                  bucket_name +
                  "/copy-destination folder"
              );
              const delete_object_from_folder_params = {
                Bucket: bucket_name,
                Key: "copy-destination/" + object_key,
              };
              await s3Client.send(
                new DeleteObjectCommand(delete_object_from_folder_params)
              );
              console.log("Success. Object deleted from folder.");
              try {
                console.log(
                  "\nDeleting the bucket named " + bucket_name + "...\n"
                );
                const delete_bucket_params = { Bucket: bucket_name };
                await s3Client.send(
                  new DeleteBucketCommand(delete_bucket_params)
                );
                console.log("Success. First bucket deleted.");
                return "Run successfully"; // For unit tests.
              } catch (err) {
                console.log("Error deleting bucket.", err);
                process.exit(1);
              }
            } catch (err) {
              console.log("Error deleting object from folder.", err);
              process.exit(1);
            }
          } catch (err) {
            console.log("Error deleting object from bucket.", err);
            process.exit(1);
          }
        } catch (err) {
          console.log("Error copying object to folder.", err);
          process.exit(1);
        }
      } catch (err) {
        console.log("Error downloading object.", err);
        process.exit(1);
      }
    } catch (err) {
      console.log("Error creating and uploading object to bucket.", err);
      process.exit(1);
    }
    console.log("works");
  } catch (err) {
    console.log("Error creating bucket.", err);
  }
};
run(bucket_name, object_key, object_content);
Kotlin
SDK for Kotlin
Note

This is prerelease documentation for a feature in preview release. It is subject to change.

Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

suspend fun main(args: Array<String>) {

    val usage = """
    Usage:
        <bucketName> <key> <objectPath> <savePath> <toBucket>

    Where:
        bucketName - The Amazon S3 bucket to create.
        key - The key to use.
        objectPath - The path where the file is located (for example, C:/AWS/book2.pdf).
        savePath - The path where the file is saved after it's downloaded (for example, C:/AWS/book2.pdf).
        toBucket - An Amazon S3 bucket that the object is copied to.
    """

    if (args.size != 5) {
        println(usage)
        exitProcess(1)
    }

    val bucketName = args[0]
    val key = args[1]
    val objectPath = args[2]
    val savePath = args[3]
    val toBucket = args[4]

    // Create an Amazon S3 bucket.
    createBucket(bucketName)

    // Upload a local file to the Amazon S3 bucket.
    putObject(bucketName, key, objectPath)

    // Download the object to another local file.
    getObject(bucketName, key, savePath)

    // List all objects located in the Amazon S3 bucket.
    listBucketObs(bucketName)

    // Copy the object to another Amazon S3 bucket.
    copyBucketOb(bucketName, key, toBucket)

    // Delete the object from the Amazon S3 bucket.
    deleteBucketObs(bucketName, key)

    // Delete the Amazon S3 bucket.
    deleteBucket(bucketName)
    println("All Amazon S3 operations were successfully performed")
}

suspend fun createBucket(bucketName: String) {

    val request = CreateBucketRequest {
        bucket = bucketName
    }

    S3Client { region = "us-east-1" }.use { s3 ->
        s3.createBucket(request)
        println("$bucketName is ready")
    }
}

suspend fun putObject(bucketName: String, objectKey: String, objectPath: String) {

    val metadataVal = mutableMapOf<String, String>()
    metadataVal["myVal"] = "test"

    val request = PutObjectRequest {
        bucket = bucketName
        key = objectKey
        metadata = metadataVal
        this.body = Paths.get(objectPath).asByteStream()
    }

    S3Client { region = "us-east-1" }.use { s3 ->
        val response = s3.putObject(request)
        println("Tag information is ${response.eTag}")
    }
}

suspend fun getObject(bucketName: String, keyName: String, path: String) {

    val request = GetObjectRequest {
        key = keyName
        bucket = bucketName
    }

    S3Client { region = "us-east-1" }.use { s3 ->
        s3.getObject(request) { resp ->
            val myFile = File(path)
            resp.body?.writeToFile(myFile)
            println("Successfully read $keyName from $bucketName")
        }
    }
}

suspend fun listBucketObs(bucketName: String) {

    val request = ListObjectsRequest {
        bucket = bucketName
    }

    S3Client { region = "us-east-1" }.use { s3 ->
        val response = s3.listObjects(request)
        response.contents?.forEach { myObject ->
            println("The name of the key is ${myObject.key}")
            println("The owner is ${myObject.owner}")
        }
    }
}

suspend fun copyBucketOb(fromBucket: String, objectKey: String, toBucket: String) {

    var encodedUrl = ""
    try {
        encodedUrl = URLEncoder.encode("$fromBucket/$objectKey", StandardCharsets.UTF_8.toString())
    } catch (e: UnsupportedEncodingException) {
        println("URL could not be encoded: " + e.message)
    }

    val request = CopyObjectRequest {
        copySource = encodedUrl
        bucket = toBucket
        key = objectKey
    }

    S3Client { region = "us-east-1" }.use { s3 ->
        s3.copyObject(request)
    }
}

suspend fun deleteBucketObs(bucketName: String, objectName: String) {

    val objectId = ObjectIdentifier {
        key = objectName
    }

    val delOb = Delete {
        objects = listOf(objectId)
    }

    val request = DeleteObjectsRequest {
        bucket = bucketName
        delete = delOb
    }

    S3Client { region = "us-east-1" }.use { s3 ->
        s3.deleteObjects(request)
        println("$objectName was deleted from $bucketName")
    }
}

suspend fun deleteBucket(bucketName: String?) {

    val request = DeleteBucketRequest {
        bucket = bucketName
    }

    S3Client { region = "us-east-1" }.use { s3 ->
        s3.deleteBucket(request)
        println("The $bucketName was successfully deleted!")
    }
}
PHP
SDK for PHP
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

require 'vendor/autoload.php';

use Aws\S3\S3Client;

echo("--------------------------------------\n");
print("Welcome to the Amazon S3 getting started demo using PHP!\n");
echo("--------------------------------------\n");

$region = 'us-west-2';
$version = 'latest';

$s3client = new S3Client([
    'region' => $region,
    'version' => $version
]);
/* Inline declaration example
$s3client = new Aws\S3\S3Client(['region' => 'us-west-2', 'version' => 'latest']);
*/

$bucket_name = "doc-example-bucket-" . uniqid();

try {
    $s3client->createBucket([
        'Bucket' => $bucket_name,
        'CreateBucketConfiguration' => ['LocationConstraint' => $region],
    ]);
    echo "Created bucket named: $bucket_name \n";
} catch (Exception $exception) {
    echo "Failed to create bucket $bucket_name with error: " . $exception->getMessage();
    exit("Please fix error with bucket creation before continuing.");
}

$file_name = "local-file-" . uniqid();
try {
    $s3client->putObject([
        'Bucket' => $bucket_name,
        'Key' => $file_name,
        'SourceFile' => 'testfile.txt'
    ]);
    echo "Uploaded $file_name to $bucket_name.\n";
} catch (Exception $exception) {
    echo "Failed to upload $file_name with error: " . $exception->getMessage();
    exit("Please fix error with file upload before continuing.");
}

try {
    $file = $s3client->getObject([
        'Bucket' => $bucket_name,
        'Key' => $file_name,
    ]);
    $body = $file->get('Body');
    $body->rewind();
    echo "Downloaded the file and it begins with: {$body->read(26)}.\n";
} catch (Exception $exception) {
    echo "Failed to download $file_name from $bucket_name with error: " . $exception->getMessage();
    exit("Please fix error with file downloading before continuing.");
}

try {
    $folder = "copied-folder";
    $s3client->copyObject([
        'Bucket' => $bucket_name,
        'CopySource' => "$bucket_name/$file_name",
        'Key' => "$folder/$file_name-copy",
    ]);
    echo "Copied $file_name to $folder/$file_name-copy.\n";
} catch (Exception $exception) {
    echo "Failed to copy $file_name with error: " . $exception->getMessage();
    exit("Please fix error with object copying before continuing.");
}

try {
    $contents = $s3client->listObjects([
        'Bucket' => $bucket_name,
    ]);
    echo "The contents of your bucket are: \n";
    foreach ($contents['Contents'] as $content) {
        echo $content['Key'] . "\n";
    }
} catch (Exception $exception) {
    echo "Failed to list objects in $bucket_name with error: " . $exception->getMessage();
    exit("Please fix error with listing objects before continuing.");
}

try {
    $objects = [];
    foreach ($contents['Contents'] as $content) {
        $objects[] = [
            'Key' => $content['Key'],
        ];
    }
    $s3client->deleteObjects([
        'Bucket' => $bucket_name,
        'Key' => $file_name,
        'Delete' => [
            'Objects' => $objects,
        ],
    ]);
    $check = $s3client->listObjects([
        'Bucket' => $bucket_name,
    ]);
    if (count($check) <= 0) {
        throw new Exception("Bucket wasn't empty.");
    }
    echo "Deleted all objects and folders from $bucket_name.\n";
} catch (Exception $exception) {
    echo "Failed to delete $file_name from $bucket_name with error: " . $exception->getMessage();
    exit("Please fix error with object deletion before continuing.");
}

try {
    $s3client->deleteBucket([
        'Bucket' => $bucket_name,
    ]);
    echo "Deleted bucket $bucket_name.\n";
} catch (Exception $exception) {
    echo "Failed to delete $bucket_name with error: " . $exception->getMessage();
    exit("Please fix error with bucket deletion before continuing.");
}

echo "Successfully ran the Amazon S3 with PHP demo.\n";
Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import io
import os
import uuid

import boto3
from boto3.s3.transfer import S3UploadFailedError
from botocore.exceptions import ClientError


def do_scenario(s3_resource):
    print('-'*88)
    print("Welcome to the Amazon S3 getting started demo!")
    print('-'*88)

    bucket_name = f'doc-example-bucket-{uuid.uuid4()}'
    bucket = s3_resource.Bucket(bucket_name)
    try:
        bucket.create(
            CreateBucketConfiguration={
                'LocationConstraint': s3_resource.meta.client.meta.region_name})
        print(f"Created demo bucket named {bucket.name}.")
    except ClientError as err:
        print(f"Tried and failed to create demo bucket {bucket_name}.")
        print(f"\t{err.response['Error']['Code']}:{err.response['Error']['Message']}")
        print(f"\nCan't continue the demo without a bucket!")
        return

    file_name = None
    while file_name is None:
        file_name = input("\nEnter a file you want to upload to your bucket: ")
        if not os.path.exists(file_name):
            print(f"Couldn't find file {file_name}. Are you sure it exists?")
            file_name = None

    obj = bucket.Object(os.path.basename(file_name))
    try:
        obj.upload_file(file_name)
        print(f"Uploaded file {file_name} into bucket {bucket.name} with key {obj.key}.")
    except S3UploadFailedError as err:
        print(f"Couldn't upload file {file_name} to {bucket.name}.")
        print(f"\t{err}")

    answer = input(f"\nDo you want to download {obj.key} into memory (y/n)? ")
    if answer.lower() == 'y':
        data = io.BytesIO()
        try:
            obj.download_fileobj(data)
            data.seek(0)
            print(f"Got your object. Here are the first 20 bytes:\n")
            print(f"\t{data.read(20)}")
        except ClientError as err:
            print(f"Couldn't download {obj.key}.")
            print(f"\t{err.response['Error']['Code']}:{err.response['Error']['Message']}")

    answer = input(
        f"\nDo you want to copy {obj.key} to a subfolder in your bucket (y/n)? ")
    if answer.lower() == 'y':
        dest_obj = bucket.Object(f'demo-folder/{obj.key}')
        try:
            dest_obj.copy({'Bucket': bucket.name, 'Key': obj.key})
            print(f"Copied {obj.key} to {dest_obj.key}.")
        except ClientError as err:
            print(f"Couldn't copy {obj.key} to {dest_obj.key}.")
            print(f"\t{err.response['Error']['Code']}:{err.response['Error']['Message']}")

    print("\nYour bucket contains the following objects:")
    try:
        for o in bucket.objects.all():
            print(f"\t{o.key}")
    except ClientError as err:
        print(f"Couldn't list the objects in bucket {bucket.name}.")
        print(f"\t{err.response['Error']['Code']}:{err.response['Error']['Message']}")

    answer = input(
        "\nDo you want to delete all of the objects as well as the bucket (y/n)? ")
    if answer.lower() == 'y':
        try:
            bucket.objects.delete()
            bucket.delete()
            print(f"Emptied and deleted bucket {bucket.name}.\n")
        except ClientError as err:
            print(f"Couldn't empty and delete bucket {bucket.name}.")
            print(f"\t{err.response['Error']['Code']}:{err.response['Error']['Message']}")

    print("Thanks for watching!")
    print('-'*88)


if __name__ == '__main__':
    do_scenario(boto3.resource('s3'))
Ruby
SDK for Ruby
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

require "aws-sdk-s3" # Wraps the getting started scenario actions. class ScenarioGettingStarted attr_reader :s3_resource # @param s3_resource [Aws::S3::Resource] An Amazon S3 resource. def initialize(s3_resource) @s3_resource = s3_resource end # Creates a bucket with a random name in the currently configured account and # AWS Region. # # @return [Aws::S3::Bucket] The newly created bucket. def create_bucket bucket = @s3_resource.create_bucket( bucket: "doc-example-bucket-#{Random.uuid}", create_bucket_configuration: { location_constraint: "us-east-2" # Note: only certain regions permitted } ) puts("Created demo bucket named #{bucket.name}.") rescue Aws::Errors::ServiceError => e puts("Tried and failed to create demo bucket.") puts("\t#{e.code}: #{e.message}") puts("\nCan't continue the demo without a bucket!") raise else bucket end # Requests a file name from the user. # # @return The name of the file. def create_file File.open("demo.txt", w) { |f| f.write("This is a demo file.") } end # Uploads a file to an Amazon S3 bucket. # # @param bucket [Aws::S3::Bucket] The bucket object representing the upload destination # @return [Aws::S3::Object] The Amazon S3 object that contains the uploaded file. def upload_file(bucket) File.open("demo.txt", "w+") { |f| f.write("This is a demo file.") } s3_object = bucket.object(File.basename("demo.txt")) s3_object.upload_file("demo.txt") puts("Uploaded file demo.txt into bucket #{bucket.name} with key #{s3_object.key}.") rescue Aws::Errors::ServiceError => e puts("Couldn't upload file demo.txt to #{bucket.name}.") puts("\t#{e.code}: #{e.message}") raise else s3_object end # Downloads an Amazon S3 object to a file. # # @param s3_object [Aws::S3::Object] The object to download. def download_file(s3_object) puts("\nDo you want to download #{s3_object.key} to a local file (y/n)? ") answer = gets.chomp.downcase if answer == "y" puts("Enter a name for the downloaded file: ") file_name = gets.chomp s3_object.download_file(file_name) puts("Object #{s3_object.key} successfully downloaded to #{file_name}.") end rescue Aws::Errors::ServiceError => e puts("Couldn't download #{s3_object.key}.") puts("\t#{e.code}: #{e.message}") raise end # Copies an Amazon S3 object to a subfolder within the same bucket. # # @param source_object [Aws::S3::Object] The source object to copy. # @return [Aws::S3::Object, nil] The destination object. def copy_object(source_object) dest_object = nil puts("\nDo you want to copy #{source_object.key} to a subfolder in your bucket (y/n)? ") answer = gets.chomp.downcase if answer == "y" dest_object = source_object.bucket.object("demo-folder/#{source_object.key}") dest_object.copy_from(source_object) puts("Copied #{source_object.key} to #{dest_object.key}.") end rescue Aws::Errors::ServiceError => e puts("Couldn't copy #{source_object.key}.") puts("\t#{e.code}: #{e.message}") raise else dest_object end # Lists the objects in an Amazon S3 bucket. # # @param bucket [Aws::S3::Bucket] The bucket to query. def list_objects(bucket) puts("\nYour bucket contains the following objects:") bucket.objects.each do |obj| puts("\t#{obj.key}") end rescue Aws::Errors::ServiceError => e puts("Couldn't list the objects in bucket #{bucket.name}.") puts("\t#{e.code}: #{e.message}") raise end # Deletes the objects in an Amazon S3 bucket and deletes the bucket. # # @param bucket [Aws::S3::Bucket] The bucket to empty and delete. def delete_bucket(bucket) puts("\nDo you want to delete all of the objects as well as the bucket (y/n)? 
") answer = gets.chomp.downcase if answer == "y" bucket.objects.batch_delete! bucket.delete puts("Emptied and deleted bucket #{bucket.name}.\n") end rescue Aws::Errors::ServiceError => e puts("Couldn't empty and delete bucket #{bucket.name}.") puts("\t#{e.code}: #{e.message}") raise end end # Runs the Amazon S3 getting started scenario. def run_scenario(scenario) puts("-" * 88) puts("Welcome to the Amazon S3 getting started demo!") puts("-" * 88) bucket = scenario.create_bucket s3_object = scenario.upload_file(bucket) scenario.download_file(s3_object) scenario.copy_object(s3_object) scenario.list_objects(bucket) scenario.delete_bucket(bucket) puts("Thanks for watching!") puts("-" * 88) rescue Aws::Errors::ServiceError puts("Something went wrong with the demo!") end run_scenario(ScenarioGettingStarted.new(Aws::S3::Resource.new)) if $PROGRAM_NAME == __FILE__
Rust
SDK for Rust
Note

This documentation is for an SDK in preview release. The SDK is subject to change and should not be used in production.

Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Code for the binary crate that runs the scenario.

use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::{Client, Error, Region};
use uuid::Uuid;

#[tokio::main]
async fn main() -> Result<(), Error> {
    let (region, client, bucket_name, file_name, key, target_key) = initialize_variables().await;

    if let Err(e) = run_s3_operations(region, client, bucket_name, file_name, key, target_key).await
    {
        println!("{:?}", e);
    };

    Ok(())
}

async fn initialize_variables() -> (Region, Client, String, String, String, String) {
    let region_provider = RegionProviderChain::first_try(Region::new("us-west-2"));
    let region = region_provider.region().await.unwrap();
    let shared_config = aws_config::from_env().region(region_provider).load().await;
    let client = Client::new(&shared_config);

    let bucket_name = format!("doc-example-bucket-{}", Uuid::new_v4());
    let file_name = "s3/testfile.txt".to_string();
    let key = "test file key name".to_string();
    let target_key = "target_key".to_string();

    (region, client, bucket_name, file_name, key, target_key)
}

async fn run_s3_operations(
    region: Region,
    client: Client,
    bucket_name: String,
    file_name: String,
    key: String,
    target_key: String,
) -> Result<(), Error> {
    s3_service::create_bucket(&client, &bucket_name, region.as_ref()).await?;
    s3_service::upload_object(&client, &bucket_name, &file_name, &key).await?;
    let _object = s3_service::download_object(&client, &bucket_name, &key).await;
    s3_service::copy_object(&client, &bucket_name, &key, &target_key).await?;
    s3_service::list_objects(&client, &bucket_name).await?;
    s3_service::delete_objects(&client, &bucket_name).await?;
    s3_service::delete_bucket(&client, &bucket_name).await?;

    Ok(())
}

A library crate with common actions called by the binary.

use aws_sdk_s3::model::{
    BucketLocationConstraint, CreateBucketConfiguration, Delete, ObjectIdentifier,
};
use aws_sdk_s3::output::{GetObjectOutput, ListObjectsV2Output};
use aws_sdk_s3::types::ByteStream;
use aws_sdk_s3::{Client, Error};
use std::path::Path;
use std::str;

pub async fn delete_bucket(client: &Client, bucket_name: &str) -> Result<(), Error> {
    client.delete_bucket().bucket(bucket_name).send().await?;
    println!("Bucket deleted");
    Ok(())
}

pub async fn delete_objects(client: &Client, bucket_name: &str) -> Result<(), Error> {
    let objects = client.list_objects_v2().bucket(bucket_name).send().await?;

    let mut delete_objects: Vec<ObjectIdentifier> = vec![];
    for obj in objects.contents().unwrap_or_default() {
        let obj_id = ObjectIdentifier::builder()
            .set_key(Some(obj.key().unwrap().to_string()))
            .build();
        delete_objects.push(obj_id);
    }

    client
        .delete_objects()
        .bucket(bucket_name)
        .delete(Delete::builder().set_objects(Some(delete_objects)).build())
        .send()
        .await?;

    let objects: ListObjectsV2Output = client.list_objects_v2().bucket(bucket_name).send().await?;
    match objects.key_count {
        0 => Ok(()),
        _ => Err(Error::Unhandled(Box::from(
            "There were still objects left in the bucket.",
        ))),
    }
}

pub async fn list_objects(client: &Client, bucket_name: &str) -> Result<(), Error> {
    let objects = client.list_objects_v2().bucket(bucket_name).send().await?;
    println!("Objects in bucket:");
    for obj in objects.contents().unwrap_or_default() {
        println!("{:?}", obj.key().unwrap());
    }

    Ok(())
}

pub async fn copy_object(
    client: &Client,
    bucket_name: &str,
    object_key: &str,
    target_key: &str,
) -> Result<(), Error> {
    let mut source_bucket_and_object: String = "".to_owned();
    source_bucket_and_object.push_str(bucket_name);
    source_bucket_and_object.push('/');
    source_bucket_and_object.push_str(object_key);

    client
        .copy_object()
        .copy_source(source_bucket_and_object)
        .bucket(bucket_name)
        .key(target_key)
        .send()
        .await?;

    Ok(())
}

pub async fn download_object(client: &Client, bucket_name: &str, key: &str) -> GetObjectOutput {
    let resp = client
        .get_object()
        .bucket(bucket_name)
        .key(key)
        .send()
        .await;
    resp.unwrap()
}

pub async fn upload_object(
    client: &Client,
    bucket_name: &str,
    file_name: &str,
    key: &str,
) -> Result<(), Error> {
    let body = ByteStream::from_path(Path::new(file_name)).await;
    client
        .put_object()
        .bucket(bucket_name)
        .key(key)
        .body(body.unwrap())
        .send()
        .await?;

    println!("Uploaded file: {}", file_name);
    Ok(())
}

pub async fn create_bucket(client: &Client, bucket_name: &str, region: &str) -> Result<(), Error> {
    let constraint = BucketLocationConstraint::from(region);
    let cfg = CreateBucketConfiguration::builder()
        .location_constraint(constraint)
        .build();

    client
        .create_bucket()
        .create_bucket_configuration(cfg)
        .bucket(bucket_name)
        .send()
        .await?;
    println!("Creating bucket named: {bucket_name}");

    Ok(())
}
Swift
SDK for Swift
Note

This is prerelease documentation for an SDK in preview release. It is subject to change.

Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

A Swift class that handles the AWS SDK for Swift calls.

import Foundation
import AWSS3
import ClientRuntime
import AWSClientRuntime

/// A class containing all the code that interacts with the AWS SDK for Swift.
public class ServiceHandler {
    let client: S3Client

    /// Initialize and return a new ``ServiceHandler`` object, which is used to drive the AWS calls
    /// used for the example.
    ///
    /// - Returns: A new ``ServiceHandler`` object, ready to be called to
    ///   execute AWS operations.
    public init() async {
        do {
            client = try await S3Client()
        } catch {
            print("ERROR: ", dump(error, name: "Initializing s3 client"))
            exit(1)
        }
    }

    /// Create a new bucket with the specified name.
    ///
    /// - Parameters:
    ///   - name: Name of the bucket to create.
    /// Throws an exception if an error occurs.
    public func createBucket(name: String) async throws {
        let config = S3ClientTypes.CreateBucketConfiguration(
            locationConstraint: .usEast2
        )
        let input = CreateBucketInput(
            bucket: name,
            createBucketConfiguration: config
        )
        _ = try await client.createBucket(input: input)
    }

    /// Delete a bucket.
    /// - Parameter name: Name of the bucket to delete.
    public func deleteBucket(name: String) async throws {
        let input = DeleteBucketInput(
            bucket: name
        )
        _ = try await client.deleteBucket(input: input)
    }

    /// Upload a file from local storage to the bucket.
    /// - Parameters:
    ///   - bucket: Name of the bucket to upload the file to.
    ///   - key: Name of the file to create.
    ///   - file: Path name of the file to upload.
    public func uploadFile(bucket: String, key: String, file: String) async throws {
        let fileUrl = URL(fileURLWithPath: file)
        let fileData = try Data(contentsOf: fileUrl)
        let dataStream = ByteStream.from(data: fileData)

        let input = PutObjectInput(
            body: dataStream,
            bucket: bucket,
            key: key
        )
        _ = try await client.putObject(input: input)
    }

    /// Create a file in the specified bucket with the given name. The new
    /// file's contents are uploaded from a `Data` object.
    ///
    /// - Parameters:
    ///   - bucket: Name of the bucket to create a file in.
    ///   - key: Name of the file to create.
    ///   - data: A `Data` object to write into the new file.
    public func createFile(bucket: String, key: String, withData data: Data) async throws {
        let dataStream = ByteStream.from(data: data)

        let input = PutObjectInput(
            body: dataStream,
            bucket: bucket,
            key: key
        )
        _ = try await client.putObject(input: input)
    }

    /// Download the named file to the given directory on the local device.
    ///
    /// - Parameters:
    ///   - bucket: Name of the bucket that contains the file to be copied.
    ///   - key: The name of the file to copy from the bucket.
    ///   - to: The path of the directory on the local device where you want to
    ///     download the file.
    public func downloadFile(bucket: String, key: String, to: String) async throws {
        let fileUrl = URL(fileURLWithPath: to).appendingPathComponent(key)

        let input = GetObjectInput(
            bucket: bucket,
            key: key
        )
        let output = try await client.getObject(input: input)

        // Get the data stream object. Return immediately if there isn't one.
        guard let body = output.body else {
            return
        }
        let data = body.toBytes().toData()
        try data.write(to: fileUrl)
    }

    /// Read the specified file from the given S3 bucket into a Swift
    /// `Data` object.
    ///
    /// - Parameters:
    ///   - bucket: Name of the bucket containing the file to read.
    ///   - key: Name of the file within the bucket to read.
    ///
    /// - Returns: A `Data` object containing the complete file data.
    public func readFile(bucket: String, key: String) async throws -> Data {
        let input = GetObjectInput(
            bucket: bucket,
            key: key
        )
        let output = try await client.getObject(input: input)

        // Get the stream and return its contents in a `Data` object. If
        // there is no stream, return an empty `Data` object instead.
        guard let body = output.body else {
            return "".data(using: .utf8)!
        }
        let data = body.toBytes().toData()
        return data
    }

    /// Copy a file from one bucket to another.
    ///
    /// - Parameters:
    ///   - sourceBucket: Name of the bucket containing the source file.
    ///   - name: Name of the source file.
    ///   - destBucket: Name of the bucket to copy the file into.
    public func copyFile(from sourceBucket: String, name: String, to destBucket: String) async throws {
        let srcUrl = ("\(sourceBucket)/\(name)").addingPercentEncoding(withAllowedCharacters: .urlPathAllowed)

        let input = CopyObjectInput(
            bucket: destBucket,
            copySource: srcUrl,
            key: name
        )
        _ = try await client.copyObject(input: input)
    }

    /// Deletes the specified file from Amazon S3.
    ///
    /// - Parameters:
    ///   - bucket: Name of the bucket containing the file to delete.
    ///   - key: Name of the file to delete.
    ///
    public func deleteFile(bucket: String, key: String) async throws {
        let input = DeleteObjectInput(
            bucket: bucket,
            key: key
        )

        do {
            _ = try await client.deleteObject(input: input)
        } catch {
            throw error
        }
    }

    /// Returns an array of strings, each naming one file in the
    /// specified bucket.
    ///
    /// - Parameter bucket: Name of the bucket to get a file listing for.
    /// - Returns: An array of `String` objects, each giving the name of
    ///   one file contained in the bucket.
    public func listBucketFiles(bucket: String) async throws -> [String] {
        let input = ListObjectsV2Input(
            bucket: bucket
        )
        let output = try await client.listObjectsV2(input: input)
        var names: [String] = []

        guard let objList = output.contents else {
            return []
        }

        for obj in objList {
            if let objName = obj.key {
                names.append(objName)
            }
        }

        return names
    }
}

A command-line Swift program that manages the SDK calls.

import Foundation
import ServiceHandler
import ArgumentParser

/// The command-line arguments and options available for this
/// example command.
struct ExampleCommand: ParsableCommand {
    @Argument(help: "Name of the S3 bucket to create")
    var bucketName: String

    @Argument(help: "Pathname of the file to upload to the S3 bucket")
    var uploadSource: String

    @Argument(help: "The name (key) to give the file in the S3 bucket")
    var objName: String

    @Argument(help: "S3 bucket to copy the object to")
    var destBucket: String

    @Argument(help: "Directory where you want to download the file from the S3 bucket")
    var downloadDir: String

    static var configuration = CommandConfiguration(
        commandName: "s3-basics",
        abstract: "Demonstrates a series of basic AWS S3 functions.",
        discussion: """
        Performs the following Amazon S3 commands:

        * `CreateBucket`
        * `PutObject`
        * `GetObject`
        * `CopyObject`
        * `ListObjects`
        * `DeleteObjects`
        * `DeleteBucket`
        """
    )

    /// Called by ``main()`` to do the actual running of the AWS
    /// example.
    func runAsync() async throws {
        let serviceHandler = await ServiceHandler()

        // 1. Create the bucket.
        print("Creating the bucket \(bucketName)...")
        try await serviceHandler.createBucket(name: bucketName)

        // 2. Upload a file to the bucket.
        print("Uploading the file \(uploadSource)...")
        try await serviceHandler.uploadFile(bucket: bucketName, key: objName, file: uploadSource)

        // 3. Download the file.
        print("Downloading the file \(objName) to \(downloadDir)...")
        try await serviceHandler.downloadFile(bucket: bucketName, key: objName, to: downloadDir)

        // 4. Copy the file to another bucket.
        print("Copying the file to the bucket \(destBucket)...")
        try await serviceHandler.copyFile(from: bucketName, name: objName, to: destBucket)

        // 5. List the contents of the bucket.
        print("Getting a list of the files in the bucket \(bucketName)")
        let fileList = try await serviceHandler.listBucketFiles(bucket: bucketName)
        let numFiles = fileList.count
        if numFiles != 0 {
            print("\(numFiles) file\((numFiles > 1) ? "s" : "") in bucket \(bucketName):")
            for name in fileList {
                print("  \(name)")
            }
        } else {
            print("No files found in bucket \(bucketName)")
        }

        // 6. Delete the objects from the buckets.
        print("Deleting the file \(objName) from the bucket \(bucketName)...")
        try await serviceHandler.deleteFile(bucket: bucketName, key: objName)
        print("Deleting the file \(objName) from the bucket \(destBucket)...")
        try await serviceHandler.deleteFile(bucket: destBucket, key: objName)

        // 7. Delete the bucket.
        print("Deleting the bucket \(bucketName)...")
        try await serviceHandler.deleteBucket(name: bucketName)

        print("Done.")
    }
}

//
// Main program entry point.
//
@main
struct Main {
    static func main() async {
        let args = Array(CommandLine.arguments.dropFirst())

        do {
            let command = try ExampleCommand.parse(args)
            try await command.runAsync()
        } catch {
            ExampleCommand.exit(withError: error)
        }
    }
}

For a complete list of AWS SDK developer guides and code examples, see Using this service with an AWS SDK. This topic also includes information about getting started and details about previous SDK versions.