
Uploading an object using multipart upload

You can use multipart upload to programmatically upload a single object to Amazon S3. Each object is uploaded as a set of parts, and each part is a contiguous portion of the object's data. You can upload these parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting the other parts. After all parts of the object are uploaded, Amazon S3 assembles the parts and creates the object.
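To make these steps concrete, the following is a minimal sketch of the same flow using the low-level multipart calls in boto3 (the AWS SDK for Python). The bucket name, object key, file path, and part size are placeholders, and error handling and the abort call are omitted for brevity.

import boto3

# Placeholder names; replace with your own bucket, key, and file.
BUCKET = "amzn-s3-demo-bucket"
KEY = "large-object.bin"
FILE_PATH = "large-file.bin"
PART_SIZE = 8 * 1024 * 1024  # Every part except the last must be at least 5 MiB.

s3 = boto3.client("s3")

# 1. Initiate the multipart upload and keep the upload ID.
upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]

# 2. Upload each contiguous piece of the data as a numbered part.
parts = []
with open(FILE_PATH, "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(PART_SIZE)
        if not chunk:
            break
        response = s3.upload_part(
            Bucket=BUCKET, Key=KEY, UploadId=upload_id,
            PartNumber=part_number, Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": response["ETag"]})
        part_number += 1

# 3. Ask Amazon S3 to assemble the parts into the final object.
s3.complete_multipart_upload(
    Bucket=BUCKET, Key=KEY, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)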

For an end-to-end procedure that uses multipart upload together with additional checksums to upload an object, see Tutorial: Upload an object through multipart upload and verify its data integrity.

The following sections show how to use multipart upload with the AWS Command Line Interface (AWS CLI) and the AWS SDKs.

You can upload any type of file, such as images, backups, data, or movies, to an S3 bucket. The maximum size of a file that you can upload by using the Amazon S3 console is 160 GB. To upload a file larger than 160 GB, use the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

For instructions on uploading an object by using the AWS Management Console, see Uploading objects.

The following sections describe the Amazon S3 operations for multipart upload with the AWS CLI.

The following sections in the Amazon Simple Storage Service API Reference describe the REST API operations for multipart upload.

Some AWS SDKs expose a high-level API that simplifies multipart upload by combining the separate API operations required to complete a multipart upload into a single operation. For more information, see Uploading and copying objects using multipart upload.
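For example, with boto3 (the AWS SDK for Python), the entire initiate/upload-part/complete sequence collapses into a single upload_file call; the SDK switches to multipart automatically once the file crosses the configured threshold. The file, bucket, and key names below are placeholders.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multipart upload for anything larger than 8 MB, in 8 MB parts (placeholder values).
config = TransferConfig(multipart_threshold=8 * 1024 * 1024,
                        multipart_chunksize=8 * 1024 * 1024)

# One call: the SDK initiates the upload, sends the parts concurrently, and completes it.
s3.upload_file("large-file.bin", "amzn-s3-demo-bucket", "large-object.bin", Config=config)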

If you need to pause and resume multipart uploads, vary part sizes during the upload, or don't know the size of the data in advance, use the low-level API methods. The low-level API methods for multipart upload provide this additional functionality. For more information, see Using the AWS SDKs (low-level API).

.NET

To upload a file to an S3 bucket, use the TransferUtility class. When you upload data from a file, you can provide the object's key name; if you don't, the API uses the file name as the key name. When you upload data from a stream, you must provide the object's key name.

To set advanced upload options, such as the part size, the number of threads used when uploading parts concurrently, metadata, the storage class, or the ACL, use the TransferUtilityUploadRequest class.

Note

The TransferUtility class does not perform concurrent uploads when you use a stream as the data source.

The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to use various TransferUtility.Upload overloads to upload a file. Each successive call to upload replaces the previous upload. For information about setting up and running the code examples, see Getting started with the AWS SDK for .NET in the AWS SDK for .NET Developer Guide.

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPUHighLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            UploadFileAsync().Wait();
        }

        private static async Task UploadFileAsync()
        {
            try
            {
                var fileTransferUtility = new TransferUtility(s3Client);

                // Option 1. Upload a file. The file name is used as the object key name.
                await fileTransferUtility.UploadAsync(filePath, bucketName);
                Console.WriteLine("Upload 1 completed");

                // Option 2. Specify object key name explicitly.
                await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
                Console.WriteLine("Upload 2 completed");

                // Option 3. Upload data from a type of System.IO.Stream.
                using (var fileToUpload = new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    await fileTransferUtility.UploadAsync(fileToUpload, bucketName, keyName);
                }
                Console.WriteLine("Upload 3 completed");

                // Option 4. Specify advanced settings.
                var fileTransferUtilityRequest = new TransferUtilityUploadRequest
                {
                    BucketName = bucketName,
                    FilePath = filePath,
                    StorageClass = S3StorageClass.StandardInfrequentAccess,
                    PartSize = 6291456, // 6 MB.
                    Key = keyName,
                    CannedACL = S3CannedACL.PublicRead
                };
                fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                fileTransferUtilityRequest.Metadata.Add("param2", "Value2");

                await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
                Console.WriteLine("Upload 4 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
JavaScript

Upload a large file.

import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
  AbortMultipartUploadCommand,
  S3Client,
} from "@aws-sdk/client-s3";

const twentyFiveMB = 25 * 1024 * 1024;

export const createString = (size = twentyFiveMB) => {
  return "x".repeat(size);
};

export const main = async () => {
  const s3Client = new S3Client({});
  const bucketName = "test-bucket";
  const key = "multipart.txt";

  const str = createString();
  const buffer = Buffer.from(str, "utf8");

  let uploadId;

  try {
    const multipartUpload = await s3Client.send(
      new CreateMultipartUploadCommand({
        Bucket: bucketName,
        Key: key,
      }),
    );

    uploadId = multipartUpload.UploadId;

    const uploadPromises = [];
    // Multipart uploads require a minimum size of 5 MB per part.
    const partSize = Math.ceil(buffer.length / 5);

    // Upload each part.
    for (let i = 0; i < 5; i++) {
      const start = i * partSize;
      const end = start + partSize;
      uploadPromises.push(
        s3Client
          .send(
            new UploadPartCommand({
              Bucket: bucketName,
              Key: key,
              UploadId: uploadId,
              Body: buffer.subarray(start, end),
              PartNumber: i + 1,
            }),
          )
          .then((d) => {
            console.log("Part", i + 1, "uploaded");
            return d;
          }),
      );
    }

    const uploadResults = await Promise.all(uploadPromises);

    return await s3Client.send(
      new CompleteMultipartUploadCommand({
        Bucket: bucketName,
        Key: key,
        UploadId: uploadId,
        MultipartUpload: {
          Parts: uploadResults.map(({ ETag }, i) => ({
            ETag,
            PartNumber: i + 1,
          })),
        },
      }),
    );

    // Verify the output by downloading the file from the Amazon Simple Storage Service (Amazon S3) console.
    // Because the output is a 25 MB string, text editors might struggle to open the file.
  } catch (err) {
    console.error(err);

    if (uploadId) {
      const abortCommand = new AbortMultipartUploadCommand({
        Bucket: bucketName,
        Key: key,
        UploadId: uploadId,
      });

      await s3Client.send(abortCommand);
    }
  }
};

Download a large file.

import { GetObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { createWriteStream } from "fs";
import { fileURLToPath } from "url"; // Needed to resolve the local output path below.

const s3Client = new S3Client({});
const oneMB = 1024 * 1024;

export const getObjectRange = ({ bucket, key, start, end }) => {
  const command = new GetObjectCommand({
    Bucket: bucket,
    Key: key,
    Range: `bytes=${start}-${end}`,
  });

  return s3Client.send(command);
};

/**
 * @param {string | undefined} contentRange
 */
export const getRangeAndLength = (contentRange) => {
  const [range, length] = contentRange.split("/");
  const [start, end] = range.split("-");
  return {
    start: parseInt(start),
    end: parseInt(end),
    length: parseInt(length),
  };
};

export const isComplete = ({ end, length }) => end === length - 1;

// When downloading a large file, you might want to break it down into
// smaller pieces. Amazon S3 accepts a Range header to specify the start
// and end of the byte range to be downloaded.
const downloadInChunks = async ({ bucket, key }) => {
  const writeStream = createWriteStream(
    fileURLToPath(new URL(`./${key}`, import.meta.url)),
  ).on("error", (err) => console.error(err));

  let rangeAndLength = { start: -1, end: -1, length: -1 };

  while (!isComplete(rangeAndLength)) {
    const { end } = rangeAndLength;
    const nextRange = { start: end + 1, end: end + oneMB };

    console.log(`Downloading bytes ${nextRange.start} to ${nextRange.end}`);

    const { ContentRange, Body } = await getObjectRange({
      bucket,
      key,
      ...nextRange,
    });

    writeStream.write(await Body.transformToByteArray());
    rangeAndLength = getRangeAndLength(ContentRange);
  }
};

export const main = async () => {
  await downloadInChunks({
    bucket: "my-cool-bucket",
    key: "my-cool-object.txt",
  });
};
Go

Upload a large object by using an upload manager to break the data into parts and upload them concurrently.

// BucketBasics encapsulates the Amazon Simple Storage Service (Amazon S3) actions
// used in the examples.
// It contains S3Client, an Amazon S3 service client that is used to perform bucket
// and object actions.
type BucketBasics struct {
	S3Client *s3.Client
}

// UploadLargeObject uses an upload manager to upload data to an object in a bucket.
// The upload manager breaks large data into parts and uploads the parts concurrently.
func (basics BucketBasics) UploadLargeObject(ctx context.Context, bucketName string, objectKey string, largeObject []byte) error {
	largeBuffer := bytes.NewReader(largeObject)
	var partMiBs int64 = 10
	uploader := manager.NewUploader(basics.S3Client, func(u *manager.Uploader) {
		u.PartSize = partMiBs * 1024 * 1024
	})
	_, err := uploader.Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucketName),
		Key:    aws.String(objectKey),
		Body:   largeBuffer,
	})
	if err != nil {
		log.Printf("Couldn't upload large object to %v:%v. Here's why: %v\n",
			bucketName, objectKey, err)
	}

	return err
}

Download a large object by using a download manager to get the data in parts and download them concurrently.

// DownloadLargeObject uses a download manager to download an object from a bucket.
// The download manager gets the data in parts and writes them to a buffer until all of
// the data has been downloaded.
func (basics BucketBasics) DownloadLargeObject(ctx context.Context, bucketName string, objectKey string) ([]byte, error) {
	var partMiBs int64 = 10
	downloader := manager.NewDownloader(basics.S3Client, func(d *manager.Downloader) {
		d.PartSize = partMiBs * 1024 * 1024
	})
	buffer := manager.NewWriteAtBuffer([]byte{})
	_, err := downloader.Download(ctx, buffer, &s3.GetObjectInput{
		Bucket: aws.String(bucketName),
		Key:    aws.String(objectKey),
	})
	if err != nil {
		log.Printf("Couldn't download large object from %v:%v. Here's why: %v\n",
			bucketName, objectKey, err)
	}

	return buffer.Bytes(), err
}
PHP

This topic explains how to perform a multipart file upload by using the high-level MultipartUploader class from the AWS SDK for PHP.

The following PHP code example uploads a file to an Amazon S3 bucket. The example demonstrates how to set parameters for the MultipartUploader object.

require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

// Prepare the upload parameters.
$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
    'bucket' => $bucket,
    'key'    => $keyname
]);

// Perform the upload.
try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . PHP_EOL;
}
Python

The following example uses the high-level multipart upload Python API (the TransferManager class) to upload an object.

import sys
import threading

import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 * 1024
s3 = boto3.resource("s3")


class TransferCallback:
    """
    Handle callbacks from the transfer manager.

    The transfer manager periodically calls the __call__ method throughout
    the upload and download process so that it can take action, such as
    displaying progress to the user and collecting data about the transfer.
    """

    def __init__(self, target_size):
        self._target_size = target_size
        self._total_transferred = 0
        self._lock = threading.Lock()
        self.thread_info = {}

    def __call__(self, bytes_transferred):
        """
        The callback method that is called by the transfer manager.

        Display progress during file transfer and collect per-thread transfer
        data. This method can be called by multiple threads, so shared instance
        data is protected by a thread lock.
        """
        thread = threading.current_thread()
        with self._lock:
            self._total_transferred += bytes_transferred
            if thread.ident not in self.thread_info.keys():
                self.thread_info[thread.ident] = bytes_transferred
            else:
                self.thread_info[thread.ident] += bytes_transferred

            target = self._target_size * MB
            sys.stdout.write(
                f"\r{self._total_transferred} of {target} transferred "
                f"({(self._total_transferred / target) * 100:.2f}%)."
            )
            sys.stdout.flush()


def upload_with_default_configuration(
    local_file_path, bucket_name, object_key, file_size_mb
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, using the default
    configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def upload_with_chunksize_and_meta(
    local_file_path, bucket_name, object_key, file_size_mb, metadata=None
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart chunk size and adding metadata to the Amazon S3 object.

    The multipart chunk size controls the size of the chunks of data that are
    sent in the request. A smaller chunk size typically results in the transfer
    manager using more threads for the upload.

    The metadata is a set of key-value pairs that are stored with the object
    in Amazon S3.
    """
    transfer_callback = TransferCallback(file_size_mb)

    config = TransferConfig(multipart_chunksize=1 * MB)
    extra_args = {"Metadata": metadata} if metadata else None
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Config=config,
        ExtraArgs=extra_args,
        Callback=transfer_callback,
    )
    return transfer_callback.thread_info


def upload_with_high_threshold(local_file_path, bucket_name, object_key, file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard upload instead of
    a multipart upload.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def upload_with_sse(
    local_file_path, bucket_name, object_key, file_size_mb, sse_key=None
):
    """
    Upload a file from a local folder to an Amazon S3 bucket, adding server-side
    encryption with customer-provided encryption keys to the object.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, ExtraArgs=extra_args, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_default_configuration(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_single_thread(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, using a
    single thread.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(use_threads=False)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_high_threshold(
    bucket_name, object_key, download_file_path, file_size_mb
):
    """
    Download a file from an Amazon S3 bucket to a local folder, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard download instead
    of a multipart download.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback
    )
    return transfer_callback.thread_info


def download_with_sse(
    bucket_name, object_key, download_file_path, file_size_mb, sse_key
):
    """
    Download a file from an Amazon S3 bucket to a local folder, adding a
    customer-provided encryption key to the request.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, ExtraArgs=extra_args, Callback=transfer_callback
    )
    return transfer_callback.thread_info

The AWS SDKs expose a low-level API for multipart upload that closely resembles the Amazon S3 REST API (see Uploading and copying objects using multipart upload). Use the low-level API when you need to pause and resume multipart uploads, vary part sizes during the upload, or when you don't know the size of the upload data in advance. When you don't have these requirements, use the high-level API (see Using the AWS SDKs (high-level API)).
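As an illustration of what the low-level methods make possible, here is a hedged boto3 (Python) sketch that resumes an interrupted multipart upload: it lists the parts that already reached Amazon S3 for an existing upload ID and sends only the missing ones. The bucket, key, file path, and upload ID are placeholders, and pagination of list_parts is ignored for brevity.

import boto3

# Placeholder values; the upload ID comes from the original create_multipart_upload call.
BUCKET = "amzn-s3-demo-bucket"
KEY = "large-object.bin"
FILE_PATH = "large-file.bin"
UPLOAD_ID = "existing-upload-id"
PART_SIZE = 8 * 1024 * 1024

s3 = boto3.client("s3")

# Find out which part numbers were uploaded before the interruption.
already_uploaded = {
    p["PartNumber"]: p["ETag"]
    for p in s3.list_parts(Bucket=BUCKET, Key=KEY, UploadId=UPLOAD_ID).get("Parts", [])
}

parts = []
with open(FILE_PATH, "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(PART_SIZE)
        if not chunk:
            break
        if part_number in already_uploaded:
            # This part is already on Amazon S3; reuse its ETag instead of re-sending it.
            etag = already_uploaded[part_number]
        else:
            etag = s3.upload_part(
                Bucket=BUCKET, Key=KEY, UploadId=UPLOAD_ID,
                PartNumber=part_number, Body=chunk,
            )["ETag"]
        parts.append({"PartNumber": part_number, "ETag": etag})
        part_number += 1

s3.complete_multipart_upload(
    Bucket=BUCKET, Key=KEY, UploadId=UPLOAD_ID,
    MultipartUpload={"Parts": parts},
)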

Java

The following example shows how to use the low-level Java classes to upload a file. It performs the following steps:

  • Initiates a multipart upload by calling the AmazonS3Client.initiateMultipartUpload() method and passing in an InitiateMultipartUploadRequest object.

  • Saves the upload ID that the AmazonS3Client.initiateMultipartUpload() method returns. You provide this upload ID for each subsequent multipart upload operation.

  • Uploads the parts of the object. For each part, you call the AmazonS3Client.uploadPart() method. You provide part upload information by using an UploadPartRequest object.

  • For each part, saves the ETag from the response of the AmazonS3Client.uploadPart() method in a list. You use the ETag values to complete the multipart upload.

  • Calls the AmazonS3Client.completeMultipartUpload() method to complete the multipart upload.

For instructions on creating and testing a working sample, see Getting Started in the AWS SDK for Java Developer Guide.

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LowLevelMultipartUpload {

    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        String filePath = "*** Path to file to upload ***";

        File file = new File(filePath);
        long contentLength = file.length();
        long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();

            // Create a list of ETag objects. You retrieve ETags for each object part
            // uploaded,
            // then, after each individual part has been uploaded, pass the list of ETags to
            // the request to complete the upload.
            List<PartETag> partETags = new ArrayList<PartETag>();

            // Initiate the multipart upload.
            InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, keyName);
            InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);

            // Upload the file parts.
            long filePosition = 0;
            for (int i = 1; filePosition < contentLength; i++) {
                // Because the last part could be less than 5 MB, adjust the part size as
                // needed.
                partSize = Math.min(partSize, (contentLength - filePosition));

                // Create the request to upload a part.
                UploadPartRequest uploadRequest = new UploadPartRequest()
                        .withBucketName(bucketName)
                        .withKey(keyName)
                        .withUploadId(initResponse.getUploadId())
                        .withPartNumber(i)
                        .withFileOffset(filePosition)
                        .withFile(file)
                        .withPartSize(partSize);

                // Upload the part and add the response's ETag to our list.
                UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
                partETags.add(uploadResult.getPartETag());

                filePosition += partSize;
            }

            // Complete the multipart upload.
            CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(bucketName, keyName,
                    initResponse.getUploadId(), partETags);
            s3Client.completeMultipartUpload(compRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
.NET

The following C# example shows how to use the low-level AWS SDK for .NET multipart upload API to upload a file to an S3 bucket. For information about Amazon S3 multipart uploads, see Uploading and copying objects using multipart upload.

Note

When you use the AWS SDK for .NET API to upload large objects, a timeout might occur while data is being written to the request stream. You can set an explicit timeout by using the UploadPartRequest.

The following C# example uploads a file to an S3 bucket by using the low-level multipart upload API. For information about setting up and running the code examples, see Getting started with the AWS SDK for .NET in the AWS SDK for .NET Developer Guide.

using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPULowLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            Console.WriteLine("Uploading an object");
            UploadObjectAsync().Wait();
        }

        private static async Task UploadObjectAsync()
        {
            // Create list to store upload part responses.
            List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();

            // Setup information required to initiate the multipart upload.
            InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest
            {
                BucketName = bucketName,
                Key = keyName
            };

            // Initiate the upload.
            InitiateMultipartUploadResponse initResponse =
                await s3Client.InitiateMultipartUploadAsync(initiateRequest);

            // Upload parts.
            long contentLength = new FileInfo(filePath).Length;
            long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB

            try
            {
                Console.WriteLine("Uploading parts");

                long filePosition = 0;
                for (int i = 1; filePosition < contentLength; i++)
                {
                    UploadPartRequest uploadRequest = new UploadPartRequest
                    {
                        BucketName = bucketName,
                        Key = keyName,
                        UploadId = initResponse.UploadId,
                        PartNumber = i,
                        PartSize = partSize,
                        FilePosition = filePosition,
                        FilePath = filePath
                    };

                    // Track upload progress.
                    uploadRequest.StreamTransferProgress +=
                        new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);

                    // Upload a part and add the response to our list.
                    uploadResponses.Add(await s3Client.UploadPartAsync(uploadRequest));

                    filePosition += partSize;
                }

                // Setup to complete the upload.
                CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = initResponse.UploadId
                };
                completeRequest.AddPartETags(uploadResponses);

                // Complete the upload.
                CompleteMultipartUploadResponse completeUploadResponse =
                    await s3Client.CompleteMultipartUploadAsync(completeRequest);
            }
            catch (Exception exception)
            {
                Console.WriteLine("An AmazonS3Exception was thrown: {0}", exception.Message);

                // Abort the upload.
                AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = initResponse.UploadId
                };
                await s3Client.AbortMultipartUploadAsync(abortMPURequest);
            }
        }

        public static void UploadPartProgressEventCallback(object sender, StreamTransferProgressArgs e)
        {
            // Process event.
            Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
        }
    }
}
PHP

This topic shows how to use the low-level uploadPart method from version 3 of the AWS SDK for PHP to upload a file in parts.

The following PHP example uploads a file to an Amazon S3 bucket by using the low-level PHP API multipart upload.

require 'vendor/autoload.php';

use Aws\S3\Exception\S3Exception;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';
$filename = '*** Path to and Name of the File to Upload ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

$result = $s3->createMultipartUpload([
    'Bucket'       => $bucket,
    'Key'          => $keyname,
    'StorageClass' => 'REDUCED_REDUNDANCY',
    'Metadata'     => [
        'param1' => 'value 1',
        'param2' => 'value 2',
        'param3' => 'value 3'
    ]
]);
$uploadId = $result['UploadId'];

// Upload the file in parts.
try {
    $file = fopen($filename, 'r');
    $parts = [];
    $partNumber = 1;
    while (!feof($file)) {
        $result = $s3->uploadPart([
            'Bucket'     => $bucket,
            'Key'        => $keyname,
            'UploadId'   => $uploadId,
            'PartNumber' => $partNumber,
            'Body'       => fread($file, 5 * 1024 * 1024),
        ]);
        $parts['Parts'][$partNumber] = [
            'PartNumber' => $partNumber,
            'ETag'       => $result['ETag'],
        ];
        echo "Uploading part $partNumber of $filename." . PHP_EOL;
        $partNumber++;
    }
    fclose($file);
} catch (S3Exception $e) {
    $result = $s3->abortMultipartUpload([
        'Bucket'   => $bucket,
        'Key'      => $keyname,
        'UploadId' => $uploadId
    ]);

    echo "Upload of $filename failed." . PHP_EOL;
}

// Complete the multipart upload.
$result = $s3->completeMultipartUpload([
    'Bucket'          => $bucket,
    'Key'             => $keyname,
    'UploadId'        => $uploadId,
    'MultipartUpload' => $parts,
]);
$url = $result['Location'];

echo "Uploaded $filename to $url." . PHP_EOL;

Ruby

The AWS SDK for Ruby version 3 supports Amazon S3 multipart uploads in two ways. For the first option, you can use managed file uploads. For more information, see Uploading Files to Amazon S3 in the AWS Developer Blog. Managed file uploads are the recommended method for uploading files to a bucket. They provide the following benefits:

  • Manage multipart uploads for objects larger than 15 MB.

  • Correctly open files in binary mode to avoid encoding issues.

  • Use multiple threads for uploading parts of large objects in parallel.

Alternatively, you can use the following multipart upload client operations directly: