- Java
-
The following example uploads an object using the high-level Java API for multipart uploads (the TransferManager class). For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code Examples.
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.File;

public class HighLevelMultipartUpload {

    public static void main(String[] args) throws Exception {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Object key ***";
        String filePath = "*** Path for file to upload ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();
            TransferManager tm = TransferManagerBuilder.standard()
                    .withS3Client(s3Client)
                    .build();

            // TransferManager processes all transfers asynchronously,
            // so this call returns immediately.
            Upload upload = tm.upload(bucketName, keyName, new File(filePath));
            System.out.println("Object upload started");

            // Optionally, wait for the upload to finish before continuing.
            upload.waitForCompletion();
            System.out.println("Object upload complete");
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
- .NET
-
To upload a file to an S3 bucket, use the TransferUtility class. When uploading data from a file, you must provide the object's key name. If you don't, the API uses the file name for the key name. When uploading data from a stream, you must provide the object's key name.
To set advanced upload options (such as the part size, the number of threads used to upload the parts concurrently, metadata, the storage class, or the ACL), use the TransferUtilityUploadRequest class.
The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to use various TransferUtility.Upload overloads to upload a file. Each successive call to upload replaces the previous upload. For information about the example's compatibility with a specific version of the AWS SDK for .NET and instructions for creating and testing a working sample, see Running the Amazon S3 .NET Code Examples.
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPUHighLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            UploadFileAsync().Wait();
        }

        private static async Task UploadFileAsync()
        {
            try
            {
                var fileTransferUtility =
                    new TransferUtility(s3Client);

                // Option 1. Upload a file. The file name is used as the object key name.
                await fileTransferUtility.UploadAsync(filePath, bucketName);
                Console.WriteLine("Upload 1 completed");

                // Option 2. Specify object key name explicitly.
                await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
                Console.WriteLine("Upload 2 completed");

                // Option 3. Upload data from a type of System.IO.Stream.
                using (var fileToUpload =
                    new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    await fileTransferUtility.UploadAsync(fileToUpload,
                                                          bucketName, keyName);
                }
                Console.WriteLine("Upload 3 completed");

                // Option 4. Specify advanced settings.
                var fileTransferUtilityRequest = new TransferUtilityUploadRequest
                {
                    BucketName = bucketName,
                    FilePath = filePath,
                    StorageClass = S3StorageClass.StandardInfrequentAccess,
                    PartSize = 6291456, // 6 MB.
                    Key = keyName,
                    CannedACL = S3CannedACL.PublicRead
                };
                fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                fileTransferUtilityRequest.Metadata.Add("param2", "Value2");

                await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
                Console.WriteLine("Upload 4 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
- PHP
-
This topic explains how to use the high-level Aws\S3\MultipartUploader class from the AWS SDK for PHP for multipart file uploads. It assumes that you are already following the instructions for Using the AWS SDK for PHP and Running PHP Examples and have the AWS SDK for PHP properly installed.
The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how to set parameters for the MultipartUploader object.
For more information about running the PHP examples in this guide, see Running PHP Examples.
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

// Prepare the upload parameters.
$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
    'bucket' => $bucket,
    'key'    => $keyname
]);

// Perform the upload.
try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . PHP_EOL;
}
- Python
-
The following example uploads an object using the high-level Python API for multipart uploads (the TransferManager class).
import sys
import threading

import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 * 1024
s3 = boto3.resource('s3')


class TransferCallback:
    """
    Handle callbacks from the transfer manager.

    The transfer manager periodically calls the __call__ method throughout
    the upload and download process so that it can take action, such as
    displaying progress to the user and collecting data about the transfer.
    """

    def __init__(self, target_size):
        self._target_size = target_size
        self._total_transferred = 0
        self._lock = threading.Lock()
        self.thread_info = {}

    def __call__(self, bytes_transferred):
        """
        The callback method that is called by the transfer manager.

        Display progress during file transfer and collect per-thread transfer
        data. This method can be called by multiple threads, so shared instance
        data is protected by a thread lock.
        """
        thread = threading.current_thread()
        with self._lock:
            self._total_transferred += bytes_transferred
            if thread.ident not in self.thread_info.keys():
                self.thread_info[thread.ident] = bytes_transferred
            else:
                self.thread_info[thread.ident] += bytes_transferred

            target = self._target_size * MB
            sys.stdout.write(
                f"\r{self._total_transferred} of {target} transferred "
                f"({(self._total_transferred / target) * 100:.2f}%).")
            sys.stdout.flush()


def upload_with_default_configuration(local_file_path, bucket_name,
                                      object_key, file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def upload_with_chunksize_and_meta(local_file_path, bucket_name, object_key,
                                   file_size_mb, metadata=None):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart chunk size and adding metadata to the Amazon S3 object.

    The multipart chunk size controls the size of the chunks of data that are
    sent in the request. A smaller chunk size typically results in the transfer
    manager using more threads for the upload.

    The metadata is a set of key-value pairs that are stored with the object
    in Amazon S3.
    """
    transfer_callback = TransferCallback(file_size_mb)

    config = TransferConfig(multipart_chunksize=1 * MB)
    extra_args = {'Metadata': metadata} if metadata else None
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Config=config,
        ExtraArgs=extra_args,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def upload_with_high_threshold(local_file_path, bucket_name, object_key,
                               file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard upload instead of
    a multipart upload.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Config=config,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def upload_with_sse(local_file_path, bucket_name, object_key,
                    file_size_mb, sse_key=None):
    """
    Upload a file from a local folder to an Amazon S3 bucket, adding
    server-side encryption with customer-provided encryption keys to the
    object.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {
            'SSECustomerAlgorithm': 'AES256',
            'SSECustomerKey': sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        ExtraArgs=extra_args,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_default_configuration(bucket_name, object_key,
                                        download_file_path, file_size_mb):
    """
    Download a file from an Amazon S3 bucket to a local folder, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_single_thread(bucket_name, object_key,
                                download_file_path, file_size_mb):
    """
    Download a file from an Amazon S3 bucket to a local folder, using a
    single thread.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(use_threads=False)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path,
        Config=config,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_high_threshold(bucket_name, object_key,
                                 download_file_path, file_size_mb):
    """
    Download a file from an Amazon S3 bucket to a local folder, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard download instead
    of a multipart download.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path,
        Config=config,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_sse(bucket_name, object_key, download_file_path,
                      file_size_mb, sse_key):
    """
    Download a file from an Amazon S3 bucket to a local folder, adding a
    customer-provided encryption key to the request.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {
            'SSECustomerAlgorithm': 'AES256',
            'SSECustomerKey': sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path,
        ExtraArgs=extra_args,
        Callback=transfer_callback)
    return transfer_callback.thread_info
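The size arithmetic behind these examples can be checked without touching S3. The sketch below uses hypothetical helper names (`part_count` and `is_multipart` are not part of boto3) to reproduce the part-count math implied by `multipart_chunksize`, and assumes the usual convention that files at or above `multipart_threshold` are sent as multipart transfers:

```python
import math

MB = 1024 * 1024


def part_count(file_size_bytes, chunk_size_bytes):
    """Number of parts a multipart transfer would be split into."""
    return math.ceil(file_size_bytes / chunk_size_bytes)


def is_multipart(file_size_bytes, threshold_bytes):
    """A transfer goes multipart only when the file reaches the threshold."""
    return file_size_bytes >= threshold_bytes


# An 8 MB file with the 1 MB chunk size set in upload_with_chunksize_and_meta
# is split into 8 parts.
print(part_count(8 * MB, 1 * MB))  # 8

# upload_with_high_threshold sets the threshold to twice the file size,
# which forces a single standard (non-multipart) upload.
file_size = 8 * MB
print(is_multipart(file_size, file_size * 2))  # False

# The 6291456-byte PartSize in the .NET example is exactly 6 MB.
print(6291456 // MB)  # 6
```

This is why a smaller `multipart_chunksize` tends to engage more upload threads: more parts are available for the transfer manager's thread pool to send concurrently.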