To upload a file to an S3 bucket, use the TransferUtility
class. When uploading data from a file, you can optionally provide the object's key name; if you don't, the API uses the file name as the key name. When uploading data from a stream, you must provide the object's key name.
To set advanced upload options, such as the part size, the number of threads used when uploading the parts concurrently, metadata, the storage class, and the ACL, use the TransferUtilityUploadRequest
class.
The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to use the various TransferUtility.Upload
overloads to upload a file; each successive call to upload replaces the previous upload. For information about this example's compatibility with a specific version of the AWS
SDK for .NET, and for instructions on creating and testing a working sample, see Running the Amazon S3 .NET Code Examples.
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class UploadFileMPUHighLevelAPITest
    {
        private const string bucketName = "*** provide bucket name ***";
        private const string keyName = "*** provide a name for the uploaded object ***";
        private const string filePath = "*** provide the full path name of the file to upload ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            UploadFileAsync().Wait();
        }

        private static async Task UploadFileAsync()
        {
            try
            {
                var fileTransferUtility =
                    new TransferUtility(s3Client);

                // Option 1. Upload a file. The file name is used as the object key name.
                await fileTransferUtility.UploadAsync(filePath, bucketName);
                Console.WriteLine("Upload 1 completed");

                // Option 2. Specify the object key name explicitly.
                await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
                Console.WriteLine("Upload 2 completed");

                // Option 3. Upload data from a type of System.IO.Stream.
                using (var fileToUpload =
                    new FileStream(filePath, FileMode.Open, FileAccess.Read))
                {
                    await fileTransferUtility.UploadAsync(fileToUpload,
                                                          bucketName, keyName);
                }
                Console.WriteLine("Upload 3 completed");

                // Option 4. Specify advanced settings.
                var fileTransferUtilityRequest = new TransferUtilityUploadRequest
                {
                    BucketName = bucketName,
                    FilePath = filePath,
                    StorageClass = S3StorageClass.StandardInfrequentAccess,
                    PartSize = 6291456, // 6 MB.
                    Key = keyName,
                    CannedACL = S3CannedACL.PublicRead
                };
                fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
                fileTransferUtilityRequest.Metadata.Add("param2", "Value2");

                await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
                Console.WriteLine("Upload 4 completed");
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
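Note that the TransferUtility performs a multipart upload only when the file exceeds the SDK's multipart threshold; smaller files are sent with a single PutObject request. If you need to tune that threshold or the number of parts uploaded concurrently, the TransferUtilityConfig class exposes the MinSizeBeforePartUpload and ConcurrentServiceRequests settings, and an instance of it can be passed to the TransferUtility constructor alongside the client.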
This topic explains how to use the high-level Aws\S3\MultipartUploader
class from the AWS SDK for PHP for multipart file uploads. It assumes that you are already following the instructions in Using the AWS SDK for PHP and Running PHP Examples and have the AWS SDK for PHP properly installed.
The following PHP example uploads a file to an Amazon S3 bucket. The example shows how to set parameters for the MultipartUploader
object.
For information about running the PHP examples in this guide, see Running PHP Examples.
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

// Prepare the upload parameters.
$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
    'bucket' => $bucket,
    'key'    => $keyname
]);

// Perform the upload.
try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . PHP_EOL;
}
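If a multipart upload fails partway through, you don't have to start over. With version 3 of the SDK, the MultipartUploadException caught above carries the state of the interrupted upload (available through its getState method), which can be passed back to a new MultipartUploader as the 'state' option so the upload resumes from the parts that already succeeded.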
The following example uses the high-level multipart transfer Python API (boto3 managed transfers, which are built on the TransferManager
class) to upload and download objects.
"""
Use Boto 3 managed file transfers to manage multipart uploads to and downloads
from an Amazon S3 bucket.
When the file to transfer is larger than the specified threshold, the transfer
manager automatically uses multipart uploads or downloads. This demonstration
shows how to use several of the available transfer manager settings and reports
thread usage and time to transfer.
"""
import sys
import threading
import boto3
from boto3.s3.transfer import TransferConfig
MB = 1024 * 1024
s3 = boto3.resource('s3')
class TransferCallback:
"""
Handle callbacks from the transfer manager.
The transfer manager periodically calls the __call__ method throughout
the upload and download process so that it can take action, such as
displaying progress to the user and collecting data about the transfer.
"""
def __init__(self, target_size):
self._target_size = target_size
self._total_transferred = 0
self._lock = threading.Lock()
self.thread_info = {}
def __call__(self, bytes_transferred):
"""
The callback method that is called by the transfer manager.
Display progress during file transfer and collect per-thread transfer
data. This method can be called by multiple threads, so shared instance
data is protected by a thread lock.
"""
thread = threading.current_thread()
with self._lock:
self._total_transferred += bytes_transferred
if thread.ident not in self.thread_info.keys():
self.thread_info[thread.ident] = bytes_transferred
else:
self.thread_info[thread.ident] += bytes_transferred
target = self._target_size * MB
sys.stdout.write(
f"\r{self._total_transferred} of {target} transferred "
f"({(self._total_transferred / target) * 100:.2f}%).")
sys.stdout.flush()
def upload_with_default_configuration(local_file_path, bucket_name,
                                      object_key, file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, using the default
    configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def upload_with_chunksize_and_meta(local_file_path, bucket_name, object_key,
                                   file_size_mb, metadata=None):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart chunk size and adding metadata to the Amazon S3 object.

    The multipart chunk size controls the size of the chunks of data that are
    sent in the request. A smaller chunk size typically results in the transfer
    manager using more threads for the upload.

    The metadata is a set of key-value pairs that are stored with the object
    in Amazon S3.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_chunksize=1 * MB)
    extra_args = {'Metadata': metadata} if metadata else None
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Config=config,
        ExtraArgs=extra_args,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def upload_with_high_threshold(local_file_path, bucket_name, object_key,
                               file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard upload instead of
    a multipart upload.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        Config=config,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def upload_with_sse(local_file_path, bucket_name, object_key,
                    file_size_mb, sse_key=None):
    """
    Upload a file from a local folder to an Amazon S3 bucket, adding server-side
    encryption with customer-provided encryption keys to the object.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {
            'SSECustomerAlgorithm': 'AES256',
            'SSECustomerKey': sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).upload_file(
        local_file_path,
        object_key,
        ExtraArgs=extra_args,
        Callback=transfer_callback)
    return transfer_callback.thread_info
def download_with_default_configuration(bucket_name, object_key,
                                        download_file_path, file_size_mb):
    """
    Download a file from an Amazon S3 bucket to a local folder, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_single_thread(bucket_name, object_key,
                                download_file_path, file_size_mb):
    """
    Download a file from an Amazon S3 bucket to a local folder, using a
    single thread.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(use_threads=False)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path,
        Config=config,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_high_threshold(bucket_name, object_key,
                                 download_file_path, file_size_mb):
    """
    Download a file from an Amazon S3 bucket to a local folder, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard download instead
    of a multipart download.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path,
        Config=config,
        Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_sse(bucket_name, object_key, download_file_path,
                      file_size_mb, sse_key):
    """
    Download a file from an Amazon S3 bucket to a local folder, adding a
    customer-provided encryption key to the request.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {
            'SSECustomerAlgorithm': 'AES256',
            'SSECustomerKey': sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path,
        ExtraArgs=extra_args,
        Callback=transfer_callback)
    return transfer_callback.thread_info
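The functions above can be driven from a small script such as the following. This is a minimal sketch, not part of the original sample: the bucket name, object key, and local file path are hypothetical placeholders you must replace, and it assumes a 32 MB test file so that the 1 MB chunk size actually triggers a multipart upload. It reports how long the transfer took and how the work was split across threads, using the thread_info dictionary collected by TransferCallback.

import time

# Hypothetical placeholders; substitute your own bucket, key, and local file.
BUCKET = 'my-example-bucket'
KEY = 'my-example-object'
LOCAL_FILE = '/tmp/32mb-test-file'
FILE_SIZE_MB = 32

start = time.perf_counter()
# Upload with a 1 MB chunk size and some object metadata, using the
# upload_with_chunksize_and_meta function defined above.
thread_info = upload_with_chunksize_and_meta(
    LOCAL_FILE, BUCKET, KEY, FILE_SIZE_MB,
    metadata={'demo': 'multipart-upload'})
elapsed = time.perf_counter() - start

print()
print(f"Upload took {elapsed:.2f} seconds.")
# thread_info maps each thread ident to the bytes that thread transferred;
# with a small chunk size, several threads typically share the work.
for ident, byte_count in thread_info.items():
    print(f"Thread {ident} copied {byte_count} bytes.")

Because the callback is shared across threads, the per-thread byte counts give a quick picture of how the transfer manager parallelized the upload; running the same script with upload_with_high_threshold instead should show a single thread doing all the work.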