There are more AWS SDK examples available in the AWS Doc SDK Examples GitHub repository.
SDK for Python (Boto3)
The following code examples show you how to perform actions with Amazon S3 by using the AWS SDK for Python (Boto3).
Actions are code excerpts that show you how to call individual service functions.
Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service.
Each example includes a link to GitHub, where you can find instructions on how to set up and run the code in context.
Get started

The following code example shows how to get started using Amazon S3.

SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import boto3


def hello_s3():
    """
    Use the AWS SDK for Python (Boto3) to create an Amazon Simple Storage Service
    (Amazon S3) resource and list the buckets in your account.
    This example uses the default settings specified in your shared credentials
    and config files.
    """
    s3_resource = boto3.resource('s3')
    print("Hello, Amazon S3! Let's list your buckets:")
    for bucket in s3_resource.buckets.all():
        print(f"\t{bucket.name}")


if __name__ == '__main__':
    hello_s3()
For API details, see ListBuckets in AWS SDK for Python (Boto3) API Reference.

Actions

The following code example shows how to add cross-origin resource sharing (CORS) rules to an Amazon S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def put_cors(self, cors_rules):
        """
        Apply CORS rules to the bucket. CORS rules specify the HTTP actions that are
        allowed from other domains.

        :param cors_rules: The CORS rules to apply.
        """
        try:
            self.bucket.Cors().put(CORSConfiguration={'CORSRules': cors_rules})
            logger.info(
                "Put CORS rules %s for bucket '%s'.", cors_rules, self.bucket.name)
        except ClientError:
            logger.exception("Couldn't put CORS rules for bucket %s.", self.bucket.name)
            raise
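As a brief usage sketch (the bucket name and allowed origin below are hypothetical placeholders), the wrapper is constructed from a Boto3 Bucket resource, and the cors_rules argument is a list of rule dictionaries:

import boto3

# Hypothetical bucket name and origin, for illustration only.
wrapper = BucketWrapper(boto3.resource('s3').Bucket('doc-example-bucket'))
wrapper.put_cors([{
    'AllowedOrigins': ['http://www.example.com'],
    'AllowedMethods': ['GET', 'PUT'],
    'AllowedHeaders': ['*'],
    'MaxAgeSeconds': 3600
}])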
For API details, see PutBucketCors in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to add a lifecycle configuration to an Amazon S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def put_lifecycle_configuration(self, lifecycle_rules):
        """
        Apply a lifecycle configuration to the bucket. The lifecycle configuration can
        be used to archive or delete the objects in the bucket according to specified
        parameters, such as a number of days.

        :param lifecycle_rules: The lifecycle rules to apply.
        """
        try:
            self.bucket.LifecycleConfiguration().put(
                LifecycleConfiguration={'Rules': lifecycle_rules})
            logger.info(
                "Put lifecycle rules %s for bucket '%s'.", lifecycle_rules,
                self.bucket.name)
        except ClientError:
            logger.exception(
                "Couldn't put lifecycle rules for bucket '%s'.", self.bucket.name)
            raise
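A short usage sketch follows; the rule below is a hypothetical example that expires objects under a 'logs/' prefix after 30 days:

import boto3

# Hypothetical bucket and rule, for illustration only.
wrapper = BucketWrapper(boto3.resource('s3').Bucket('doc-example-bucket'))
wrapper.put_lifecycle_configuration([{
    'ID': 'expire-logs',
    'Filter': {'Prefix': 'logs/'},
    'Status': 'Enabled',
    'Expiration': {'Days': 30}
}])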
For API details, see PutBucketLifecycleConfiguration in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to add a policy to an Amazon S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def put_policy(self, policy):
        """
        Apply a security policy to the bucket. Policies control users' ability to
        perform specific actions, such as listing the objects in the bucket.

        :param policy: The policy to apply to the bucket.
        """
        try:
            self.bucket.Policy().put(Policy=json.dumps(policy))
            logger.info("Put policy %s for bucket '%s'.", policy, self.bucket.name)
        except ClientError:
            logger.exception("Couldn't apply policy to bucket '%s'.", self.bucket.name)
            raise
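As a usage sketch, the policy argument is a standard IAM policy document expressed as a Python dictionary; the account ID, user, and bucket below are hypothetical:

import boto3

# Hypothetical principal and bucket, for illustration only.
wrapper = BucketWrapper(boto3.resource('s3').Bucket('doc-example-bucket'))
wrapper.put_policy({
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'AWS': 'arn:aws:iam::111122223333:user/example-user'},
        'Action': ['s3:GetObject'],
        'Resource': 'arn:aws:s3:::doc-example-bucket/*'
    }]
})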
For API details, see PutBucketPolicy in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to copy an object from one Amazon S3 bucket to another.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class ObjectWrapper:
    """Encapsulates S3 object actions."""
    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in
                          Boto3 that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    def copy(self, dest_object):
        """
        Copies the object to another bucket.

        :param dest_object: The destination object initialized with a bucket and key.
                            This is a Boto3 Object resource.
        """
        try:
            dest_object.copy_from(CopySource={
                'Bucket': self.object.bucket_name,
                'Key': self.object.key
            })
            dest_object.wait_until_exists()
            logger.info(
                "Copied object from %s:%s to %s:%s.",
                self.object.bucket_name, self.object.key,
                dest_object.bucket_name, dest_object.key)
        except ClientError:
            logger.exception(
                "Couldn't copy object from %s/%s to %s/%s.",
                self.object.bucket_name, self.object.key,
                dest_object.bucket_name, dest_object.key)
            raise
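A short usage sketch, assuming both buckets already exist (all names below are hypothetical):

import boto3

s3 = boto3.resource('s3')
# Hypothetical bucket and key names, for illustration only.
source = ObjectWrapper(s3.Object('doc-example-source-bucket', 'example.txt'))
source.copy(s3.Object('doc-example-dest-bucket', 'example-copy.txt'))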
For API details, see CopyObject in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to create an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create a bucket with default settings.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def create(self, region_override=None):
        """
        Create an Amazon S3 bucket in the default Region for the account or in the
        specified Region.

        :param region_override: The Region in which to create the bucket. If this is
                                not specified, the Region configured in your shared
                                credentials is used.
        """
        if region_override is not None:
            region = region_override
        else:
            region = self.bucket.meta.client.meta.region_name
        try:
            self.bucket.create(
                CreateBucketConfiguration={'LocationConstraint': region})
            self.bucket.wait_until_exists()
            logger.info(
                "Created bucket '%s' in region=%s", self.bucket.name, region)
        except ClientError as error:
            logger.exception(
                "Couldn't create bucket named '%s' in region=%s.",
                self.bucket.name, region)
            raise error
Create a version-enabled bucket with a lifecycle configuration.

def create_versioned_bucket(bucket_name, prefix):
    """
    Creates an Amazon S3 bucket, enables it for versioning, and configures a lifecycle
    that expires noncurrent object versions after 7 days.

    Adding a lifecycle configuration to a versioned bucket is a best practice.
    It helps prevent objects in the bucket from accumulating a large number of
    noncurrent versions, which can slow down request performance.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket_name: The name of the bucket to create.
    :param prefix: Identifies which objects are automatically expired under the
                   configured lifecycle rules.
    :return: The newly created bucket.
    """
    try:
        bucket = s3.create_bucket(
            Bucket=bucket_name,
            CreateBucketConfiguration={
                'LocationConstraint': s3.meta.client.meta.region_name
            }
        )
        logger.info("Created bucket %s.", bucket.name)
    except ClientError as error:
        if error.response['Error']['Code'] == 'BucketAlreadyOwnedByYou':
            logger.warning("Bucket %s already exists! Using it.", bucket_name)
            bucket = s3.Bucket(bucket_name)
        else:
            logger.exception("Couldn't create bucket %s.", bucket_name)
            raise

    try:
        bucket.Versioning().enable()
        logger.info("Enabled versioning on bucket %s.", bucket.name)
    except ClientError:
        logger.exception("Couldn't enable versioning on bucket %s.", bucket.name)
        raise

    try:
        expiration = 7
        bucket.LifecycleConfiguration().put(
            LifecycleConfiguration={
                'Rules': [{
                    'Status': 'Enabled',
                    'Prefix': prefix,
                    'NoncurrentVersionExpiration': {'NoncurrentDays': expiration}
                }]
            }
        )
        logger.info("Configured lifecycle to expire noncurrent versions after %s days "
                    "on bucket %s.", expiration, bucket.name)
    except ClientError as error:
        logger.warning("Couldn't configure lifecycle on bucket %s because %s. "
                       "Continuing anyway.", bucket.name, error)

    return bucket
For API details, see CreateBucket in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to delete CORS rules from an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def delete_cors(self):
        """
        Delete the CORS rules from the bucket.
        """
        try:
            self.bucket.Cors().delete()
            logger.info("Deleted CORS from bucket '%s'.", self.bucket.name)
        except ClientError:
            logger.exception("Couldn't delete CORS from bucket '%s'.", self.bucket.name)
            raise
For API details, see DeleteBucketCors in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to delete a policy from an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def delete_policy(self):
        """
        Delete the security policy from the bucket.
        """
        try:
            self.bucket.Policy().delete()
            logger.info("Deleted policy for bucket '%s'.", self.bucket.name)
        except ClientError:
            logger.exception("Couldn't delete policy for bucket '%s'.",
                             self.bucket.name)
            raise
For API details, see DeleteBucketPolicy in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to delete an empty S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def delete(self):
        """
        Delete the bucket. The bucket must be empty or an error is raised.
        """
        try:
            self.bucket.delete()
            self.bucket.wait_until_not_exists()
            logger.info("Bucket %s successfully deleted.", self.bucket.name)
        except ClientError:
            logger.exception("Couldn't delete bucket %s.", self.bucket.name)
            raise
For API details, see DeleteBucket in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to delete an S3 object.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Delete an object.

class ObjectWrapper:
    """Encapsulates S3 object actions."""
    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in
                          Boto3 that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    def delete(self):
        """
        Deletes the object.
        """
        try:
            self.object.delete()
            self.object.wait_until_not_exists()
            logger.info(
                "Deleted object '%s' from bucket '%s'.",
                self.object.key, self.object.bucket_name)
        except ClientError:
            logger.exception(
                "Couldn't delete object '%s' from bucket '%s'.",
                self.object.key, self.object.bucket_name)
            raise
Roll back an object to an earlier version by deleting the newer versions of the object.

def rollback_object(bucket, object_key, version_id):
    """
    Rolls back an object to an earlier version by deleting all versions that
    occurred after the specified rollback version.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket: The bucket that holds the object to roll back.
    :param object_key: The object to roll back.
    :param version_id: The version ID to roll back to.
    """
    # Versions must be sorted by last_modified date because delete markers are
    # at the end of the list even when they are interspersed in time.
    versions = sorted(bucket.object_versions.filter(Prefix=object_key),
                      key=attrgetter('last_modified'), reverse=True)
    logger.debug(
        "Got versions:\n%s",
        '\n'.join([f"\t{version.version_id}, last modified {version.last_modified}"
                   for version in versions]))

    if version_id in [ver.version_id for ver in versions]:
        print(f"Rolling back to version {version_id}")
        for version in versions:
            if version.version_id != version_id:
                version.delete()
                print(f"Deleted version {version.version_id}")
            else:
                break
        print(f"Active version is now {bucket.Object(object_key).version_id}")
    else:
        raise KeyError(f"{version_id} was not found in the list of versions for "
                       f"{object_key}.")
Revive a deleted object by removing the object's active delete marker.

def revive_object(bucket, object_key):
    """
    Revives a versioned object that was deleted by removing the object's active
    delete marker.
    A versioned object presents as deleted when its latest version is a delete marker.
    By removing the delete marker, we make the previous version the latest version
    and the object then presents as *not* deleted.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket: The bucket that contains the object.
    :param object_key: The object to revive.
    """
    # Get the latest version for the object.
    response = s3.meta.client.list_object_versions(
        Bucket=bucket.name, Prefix=object_key, MaxKeys=1)

    if 'DeleteMarkers' in response:
        latest_version = response['DeleteMarkers'][0]
        if latest_version['IsLatest']:
            logger.info("Object %s was indeed deleted on %s. Let's revive it.",
                        object_key, latest_version['LastModified'])
            obj = bucket.Object(object_key)
            obj.Version(latest_version['VersionId']).delete()
            logger.info("Revived %s, active version is now %s with body '%s'",
                        object_key, obj.version_id, obj.get()['Body'].read())
        else:
            logger.warning("Delete marker is not the latest version for %s!",
                           object_key)
    elif 'Versions' in response:
        logger.warning("Got an active version for %s, nothing to do.", object_key)
    else:
        logger.error("Couldn't get any version info for %s.", object_key)
Create a Lambda handler that removes a delete marker from an S3 object. This handler can be used to efficiently clean up extraneous delete markers in a versioned bucket.

import logging
from urllib import parse
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logger.setLevel('INFO')

s3 = boto3.client('s3')


def lambda_handler(event, context):
    """
    Removes a delete marker from the specified versioned object.

    :param event: The S3 batch event that contains the ID of the delete marker
                  to remove.
    :param context: Context about the event.
    :return: A result structure that Amazon S3 uses to interpret the result of the
             operation. When the result code is TemporaryFailure, S3 retries the
             operation.
    """
    # Parse job parameters from Amazon S3 batch operations
    invocation_id = event['invocationId']
    invocation_schema_version = event['invocationSchemaVersion']

    results = []
    result_code = None
    result_string = None

    task = event['tasks'][0]
    task_id = task['taskId']

    try:
        obj_key = parse.unquote(task['s3Key'], encoding='utf-8')
        obj_version_id = task['s3VersionId']
        bucket_name = task['s3BucketArn'].split(':')[-1]

        logger.info("Got task: remove delete marker %s from object %s.",
                    obj_version_id, obj_key)

        try:
            # If this call does not raise an error, the object version is not a delete
            # marker and should not be deleted.
            response = s3.head_object(
                Bucket=bucket_name, Key=obj_key, VersionId=obj_version_id)
            result_code = 'PermanentFailure'
            result_string = f"Object {obj_key}, ID {obj_version_id} is not " \
                            f"a delete marker."

            logger.debug(response)
            logger.warning(result_string)
        except ClientError as error:
            delete_marker = error.response['ResponseMetadata']['HTTPHeaders'] \
                .get('x-amz-delete-marker', 'false')
            if delete_marker == 'true':
                logger.info("Object %s, version %s is a delete marker.",
                            obj_key, obj_version_id)
                try:
                    s3.delete_object(
                        Bucket=bucket_name, Key=obj_key, VersionId=obj_version_id)
                    result_code = 'Succeeded'
                    result_string = f"Successfully removed delete marker " \
                                    f"{obj_version_id} from object {obj_key}."
                    logger.info(result_string)
                except ClientError as error:
                    # Mark request timeout as a temporary failure so it will be retried.
                    if error.response['Error']['Code'] == 'RequestTimeout':
                        result_code = 'TemporaryFailure'
                        result_string = f"Attempt to remove delete marker from " \
                                        f"object {obj_key} timed out."
                        logger.info(result_string)
                    else:
                        raise
            else:
                raise ValueError(f"The x-amz-delete-marker header is either not "
                                 f"present or is not 'true'.")
    except Exception as error:
        # Mark all other exceptions as permanent failures.
        result_code = 'PermanentFailure'
        result_string = str(error)
        logger.exception(error)
    finally:
        results.append({
            'taskId': task_id,
            'resultCode': result_code,
            'resultString': result_string
        })
    return {
        'invocationSchemaVersion': invocation_schema_version,
        'treatMissingKeysAs': 'PermanentFailure',
        'invocationId': invocation_id,
        'results': results
    }
For API details, see DeleteObject in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to delete multiple objects from an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Delete a set of objects by using a list of object keys.

class ObjectWrapper:
    """Encapsulates S3 object actions."""
    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in
                          Boto3 that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    @staticmethod
    def delete_objects(bucket, object_keys):
        """
        Removes a list of objects from a bucket. This operation is done as a batch
        in a single request.

        :param bucket: The bucket that contains the objects. This is a Boto3 Bucket
                       resource.
        :param object_keys: The list of keys that identify the objects to remove.
        :return: The response that contains data about which objects were deleted
                 and any that could not be deleted.
        """
        try:
            response = bucket.delete_objects(Delete={
                'Objects': [{'Key': key} for key in object_keys]
            })
            if 'Deleted' in response:
                logger.info(
                    "Deleted objects '%s' from bucket '%s'.",
                    [del_obj['Key'] for del_obj in response['Deleted']], bucket.name)
            if 'Errors' in response:
                logger.warning(
                    "Could not delete objects '%s' from bucket '%s'.",
                    [f"{del_obj['Key']}: {del_obj['Code']}"
                     for del_obj in response['Errors']],
                    bucket.name)
        except ClientError:
            logger.exception("Couldn't delete any objects from bucket %s.",
                             bucket.name)
            raise
        else:
            return response
Delete all objects in a bucket.

class ObjectWrapper:
    """Encapsulates S3 object actions."""
    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in
                          Boto3 that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    @staticmethod
    def empty_bucket(bucket):
        """
        Remove all objects from a bucket.

        :param bucket: The bucket to empty. This is a Boto3 Bucket resource.
        """
        try:
            bucket.objects.delete()
            logger.info("Emptied bucket '%s'.", bucket.name)
        except ClientError:
            logger.exception("Couldn't empty bucket '%s'.", bucket.name)
            raise
Permanently delete a versioned object by deleting all of its versions.

def permanently_delete_object(bucket, object_key):
    """
    Permanently deletes a versioned object by deleting all of its versions.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket: The bucket that contains the object.
    :param object_key: The object to delete.
    """
    try:
        bucket.object_versions.filter(Prefix=object_key).delete()
        logger.info("Permanently deleted all versions of object %s.", object_key)
    except ClientError:
        logger.exception("Couldn't delete all versions of %s.", object_key)
        raise
For API details, see DeleteObjects in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to delete the lifecycle configuration of an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def delete_lifecycle_configuration(self):
        """
        Remove the lifecycle configuration from the specified bucket.
        """
        try:
            self.bucket.LifecycleConfiguration().delete()
            logger.info(
                "Deleted lifecycle configuration for bucket '%s'.", self.bucket.name)
        except ClientError:
            logger.exception(
                "Couldn't delete lifecycle configuration for bucket '%s'.",
                self.bucket.name)
            raise
For API details, see DeleteBucketLifecycle in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to determine whether an S3 bucket exists.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def exists(self):
        """
        Determine whether the bucket exists and you have access to it.

        :return: True when the bucket exists; otherwise, False.
        """
        try:
            self.bucket.meta.client.head_bucket(Bucket=self.bucket.name)
            logger.info("Bucket %s exists.", self.bucket.name)
            exists = True
        except ClientError:
            logger.warning(
                "Bucket %s doesn't exist or you don't have access to it.",
                self.bucket.name)
            exists = False
        return exists
For API details, see HeadBucket in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to get cross-origin resource sharing (CORS) rules for an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def get_cors(self):
        """
        Get the CORS rules for the bucket.

        :return: The CORS rules for the specified bucket.
        """
        try:
            cors = self.bucket.Cors()
            logger.info(
                "Got CORS rules %s for bucket '%s'.", cors.cors_rules,
                self.bucket.name)
        except ClientError:
            logger.exception("Couldn't get CORS for bucket %s.", self.bucket.name)
            raise
        else:
            return cors
For API details, see GetBucketCors in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to read data from an object in an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class ObjectWrapper:
    """Encapsulates S3 object actions."""
    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in
                          Boto3 that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    def get(self):
        """
        Gets the object.

        :return: The object data in bytes.
        """
        try:
            body = self.object.get()['Body'].read()
            logger.info(
                "Got object '%s' from bucket '%s'.",
                self.object.key, self.object.bucket_name)
        except ClientError:
            logger.exception(
                "Couldn't get object '%s' from bucket '%s'.",
                self.object.key, self.object.bucket_name)
            raise
        else:
            return body
For API details, see GetObject in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to get the access control list (ACL) of an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def get_acl(self):
        """
        Get the ACL of the bucket.

        :return: The ACL of the bucket.
        """
        try:
            acl = self.bucket.Acl()
            logger.info(
                "Got ACL for bucket %s. Owner is %s.", self.bucket.name, acl.owner)
        except ClientError:
            logger.exception("Couldn't get ACL for bucket %s.", self.bucket.name)
            raise
        else:
            return acl
For API details, see GetBucketAcl in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to get the access control list (ACL) of an S3 object.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class ObjectWrapper:
    """Encapsulates S3 object actions."""
    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in
                          Boto3 that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    def get_acl(self):
        """
        Gets the ACL of the object.

        :return: The ACL of the object.
        """
        try:
            acl = self.object.Acl()
            logger.info(
                "Got ACL for object %s owned by %s.",
                self.object.key, acl.owner['DisplayName'])
        except ClientError:
            logger.exception("Couldn't get ACL for object %s.", self.object.key)
            raise
        else:
            return acl
For API details, see GetObjectAcl in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to get the lifecycle configuration of an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def get_lifecycle_configuration(self):
        """
        Get the lifecycle configuration of the bucket.

        :return: The lifecycle rules of the specified bucket.
        """
        try:
            config = self.bucket.LifecycleConfiguration()
            logger.info(
                "Got lifecycle rules %s for bucket '%s'.", config.rules,
                self.bucket.name)
        except ClientError:
            logger.exception(
                "Couldn't get lifecycle rules for bucket '%s'.", self.bucket.name)
            raise
        else:
            return config.rules
For API details, see GetBucketLifecycleConfiguration in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to get the policy of an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def get_policy(self):
        """
        Get the security policy of the bucket.

        :return: The security policy of the specified bucket, in JSON format.
        """
        try:
            policy = self.bucket.Policy()
            logger.info("Got policy %s for bucket '%s'.", policy.policy,
                        self.bucket.name)
        except ClientError:
            logger.exception("Couldn't get policy for bucket '%s'.",
                             self.bucket.name)
            raise
        else:
            return json.loads(policy.policy)
For API details, see GetBucketPolicy in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to list S3 buckets.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    @staticmethod
    def list(s3_resource):
        """
        Get the buckets in all Regions for the current account.

        :param s3_resource: A Boto3 S3 resource. This is a high-level resource in Boto3
                            that contains collections and factory methods to create
                            other high-level S3 sub-resources.
        :return: The list of buckets.
        """
        try:
            buckets = list(s3_resource.buckets.all())
            logger.info("Got buckets: %s.", buckets)
        except ClientError:
            logger.exception("Couldn't get buckets.")
            raise
        else:
            return buckets
For API details, see ListBuckets in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to list objects in an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class ObjectWrapper:
    """Encapsulates S3 object actions."""
    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in
                          Boto3 that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    @staticmethod
    def list(bucket, prefix=None):
        """
        Lists the objects in a bucket, optionally filtered by a prefix.

        :param bucket: The bucket to query. This is a Boto3 Bucket resource.
        :param prefix: When specified, only objects that start with this prefix
                       are listed.
        :return: The list of objects.
        """
        try:
            if not prefix:
                objects = list(bucket.objects.all())
            else:
                objects = list(bucket.objects.filter(Prefix=prefix))
            logger.info("Got objects %s from bucket '%s'",
                        [o.key for o in objects], bucket.name)
        except ClientError:
            logger.exception("Couldn't get objects for bucket '%s'.", bucket.name)
            raise
        else:
            return objects
For API details, see ListObjects in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to set a new access control list (ACL) for an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def grant_log_delivery_access(self):
        """
        Grant the AWS Log Delivery group write access to the bucket so that
        Amazon S3 can deliver access logs to the bucket. This is the only recommended
        use of an S3 bucket ACL.
        """
        try:
            acl = self.bucket.Acl()
            # Putting an ACL overwrites the existing ACL. If you want to preserve
            # existing grants, append new grants to the list of existing grants.
            grants = acl.grants if acl.grants else []
            grants.append({
                'Grantee': {
                    'Type': 'Group',
                    'URI': 'http://acs.amazonaws.com/groups/s3/LogDelivery'
                },
                'Permission': 'WRITE'
            })
            acl.put(
                AccessControlPolicy={
                    'Grants': grants,
                    'Owner': acl.owner
                }
            )
            logger.info("Granted log delivery access to bucket '%s'",
                        self.bucket.name)
        except ClientError:
            logger.exception("Couldn't add ACL to bucket '%s'.", self.bucket.name)
            raise
For API details, see PutBucketAcl in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to set the access control list (ACL) of an S3 object.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class ObjectWrapper:
    """Encapsulates S3 object actions."""
    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in
                          Boto3 that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    def put_acl(self, email):
        """
        Applies an ACL to the object that grants read access to an AWS user
        identified by email address.

        :param email: The email address of the user to grant access.
        """
        try:
            acl = self.object.Acl()
            # Putting an ACL overwrites the existing ACL, so append new grants
            # if you want to preserve existing grants.
            grants = acl.grants if acl.grants else []
            grants.append({
                'Grantee': {
                    'Type': 'AmazonCustomerByEmail',
                    'EmailAddress': email
                },
                'Permission': 'READ'
            })
            acl.put(
                AccessControlPolicy={
                    'Grants': grants,
                    'Owner': acl.owner
                }
            )
            logger.info("Granted read access to %s.", email)
        except ClientError:
            logger.exception("Couldn't add ACL to object '%s'.", self.object.key)
            raise
For API details, see PutObjectAcl in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to upload an object to an S3 bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

class ObjectWrapper:
    """Encapsulates S3 object actions."""
    def __init__(self, s3_object):
        """
        :param s3_object: A Boto3 Object resource. This is a high-level resource in
                          Boto3 that wraps object actions in a class-like structure.
        """
        self.object = s3_object
        self.key = self.object.key

    def put(self, data):
        """
        Upload data to the object.

        :param data: The data to upload. This can either be bytes or a string. When
                     this argument is a string, it is interpreted as a file name, which
                     is opened in read bytes mode.
        """
        put_data = data
        if isinstance(data, str):
            try:
                put_data = open(data, 'rb')
            except IOError:
                logger.exception("Expected file name or binary data, got '%s'.", data)
                raise

        try:
            self.object.put(Body=put_data)
            self.object.wait_until_exists()
            logger.info(
                "Put object '%s' to bucket '%s'.",
                self.object.key, self.object.bucket_name)
        except ClientError:
            logger.exception(
                "Couldn't put object '%s' to bucket '%s'.",
                self.object.key, self.object.bucket_name)
            raise
        finally:
            if getattr(put_data, 'close', None):
                put_data.close()
For API details, see PutObject in AWS SDK for Python (Boto3) API Reference.

Scenarios

The following code example shows how to create a presigned URL for Amazon S3 and upload an object.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Generate a presigned URL that can perform an S3 action for a limited time. Use the Requests package to make a request with the URL.

import argparse
import logging
import boto3
from botocore.exceptions import ClientError
import requests

logger = logging.getLogger(__name__)


def generate_presigned_url(s3_client, client_method, method_parameters, expires_in):
    """
    Generate a presigned Amazon S3 URL that can be used to perform an action.

    :param s3_client: A Boto3 Amazon S3 client.
    :param client_method: The name of the client method that the URL performs.
    :param method_parameters: The parameters of the specified client method.
    :param expires_in: The number of seconds the presigned URL is valid for.
    :return: The presigned URL.
    """
    try:
        url = s3_client.generate_presigned_url(
            ClientMethod=client_method,
            Params=method_parameters,
            ExpiresIn=expires_in
        )
        logger.info("Got presigned URL: %s", url)
    except ClientError:
        logger.exception(
            "Couldn't get a presigned URL for client method '%s'.", client_method)
        raise
    return url


def usage_demo():
    logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')

    print('-'*88)
    print("Welcome to the Amazon S3 presigned URL demo.")
    print('-'*88)

    parser = argparse.ArgumentParser()
    parser.add_argument('bucket', help="The name of the bucket.")
    parser.add_argument(
        'key', help="For a GET operation, the key of the object in Amazon S3. For a "
                    "PUT operation, the name of a file to upload.")
    parser.add_argument(
        'action', choices=('get', 'put'), help="The action to perform.")
    args = parser.parse_args()

    s3_client = boto3.client('s3')
    client_action = 'get_object' if args.action == 'get' else 'put_object'
    url = generate_presigned_url(
        s3_client, client_action, {'Bucket': args.bucket, 'Key': args.key}, 1000)

    print("Using the Requests package to send a request to the URL.")
    response = None
    if args.action == 'get':
        response = requests.get(url)
    elif args.action == 'put':
        print("Putting data to the URL.")
        try:
            with open(args.key, 'r') as object_file:
                object_text = object_file.read()
            response = requests.put(url, data=object_text)
        except FileNotFoundError:
            print(f"Couldn't find {args.key}. For a PUT operation, the key must be the "
                  f"name of a file that exists on your computer.")

    if response is not None:
        print("Got response:")
        print(f"Status: {response.status_code}")
        print(response.text)

    print('-'*88)


if __name__ == '__main__':
    usage_demo()
Generate a presigned POST request to upload a file.

class BucketWrapper:
    """Encapsulates S3 bucket actions."""
    def __init__(self, bucket):
        """
        :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                       that wraps bucket actions in a class-like structure.
        """
        self.bucket = bucket
        self.name = bucket.name

    def generate_presigned_post(self, object_key, expires_in):
        """
        Generate a presigned Amazon S3 POST request to upload a file.
        A presigned POST can be used for a limited time to let someone without an AWS
        account upload a file to a bucket.

        :param object_key: The object key to identify the uploaded object.
        :param expires_in: The number of seconds the presigned POST is valid.
        :return: A dictionary that contains the URL and form fields that contain
                 required access data.
        """
        try:
            response = self.bucket.meta.client.generate_presigned_post(
                Bucket=self.bucket.name, Key=object_key, ExpiresIn=expires_in)
            logger.info("Got presigned POST URL: %s", response['url'])
        except ClientError:
            logger.exception(
                "Couldn't get a presigned POST URL for bucket '%s' and object '%s'",
                self.bucket.name, object_key)
            raise
        return response
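As a usage sketch, the returned dictionary can be passed to the Requests package to upload a file without AWS credentials; the bucket, key, duration, and file name below are hypothetical:

import boto3
import requests

# Hypothetical names, for illustration only.
wrapper = BucketWrapper(boto3.resource('s3').Bucket('doc-example-bucket'))
post_data = wrapper.generate_presigned_post('example-object-key', 600)
with open('example-file.txt', 'rb') as file:
    response = requests.post(
        post_data['url'], data=post_data['fields'],
        files={'file': ('example-file.txt', file)})
print(f"Upload returned status {response.status_code}.")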
The following code example shows how to:
- Create a bucket and upload a file to it.
- Download an object from a bucket.
- Copy an object to a subfolder in a bucket.
- List the objects in a bucket.
- Delete the bucket objects and the bucket.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import io
import os
import uuid

import boto3
from boto3.s3.transfer import S3UploadFailedError
from botocore.exceptions import ClientError


def do_scenario(s3_resource):
    print('-'*88)
    print("Welcome to the Amazon S3 getting started demo!")
    print('-'*88)

    bucket_name = f'doc-example-bucket-{uuid.uuid4()}'
    bucket = s3_resource.Bucket(bucket_name)
    try:
        bucket.create(
            CreateBucketConfiguration={
                'LocationConstraint': s3_resource.meta.client.meta.region_name})
        print(f"Created demo bucket named {bucket.name}.")
    except ClientError as err:
        print(f"Tried and failed to create demo bucket {bucket_name}.")
        print(f"\t{err.response['Error']['Code']}:{err.response['Error']['Message']}")
        print(f"\nCan't continue the demo without a bucket!")
        return

    file_name = None
    while file_name is None:
        file_name = input("\nEnter a file you want to upload to your bucket: ")
        if not os.path.exists(file_name):
            print(f"Couldn't find file {file_name}. Are you sure it exists?")
            file_name = None

    obj = bucket.Object(os.path.basename(file_name))
    try:
        obj.upload_file(file_name)
        print(f"Uploaded file {file_name} into bucket {bucket.name} "
              f"with key {obj.key}.")
    except S3UploadFailedError as err:
        print(f"Couldn't upload file {file_name} to {bucket.name}.")
        print(f"\t{err}")

    answer = input(f"\nDo you want to download {obj.key} into memory (y/n)? ")
    if answer.lower() == 'y':
        data = io.BytesIO()
        try:
            obj.download_fileobj(data)
            data.seek(0)
            print(f"Got your object. Here are the first 20 bytes:\n")
            print(f"\t{data.read(20)}")
        except ClientError as err:
            print(f"Couldn't download {obj.key}.")
            print(f"\t{err.response['Error']['Code']}:"
                  f"{err.response['Error']['Message']}")

    answer = input(
        f"\nDo you want to copy {obj.key} to a subfolder in your bucket (y/n)? ")
    if answer.lower() == 'y':
        dest_obj = bucket.Object(f'demo-folder/{obj.key}')
        try:
            dest_obj.copy({'Bucket': bucket.name, 'Key': obj.key})
            print(f"Copied {obj.key} to {dest_obj.key}.")
        except ClientError as err:
            print(f"Couldn't copy {obj.key} to {dest_obj.key}.")
            print(f"\t{err.response['Error']['Code']}:"
                  f"{err.response['Error']['Message']}")

    print("\nYour bucket contains the following objects:")
    try:
        for o in bucket.objects.all():
            print(f"\t{o.key}")
    except ClientError as err:
        print(f"Couldn't list the objects in bucket {bucket.name}.")
        print(f"\t{err.response['Error']['Code']}:{err.response['Error']['Message']}")

    answer = input(
        "\nDo you want to delete all of the objects as well as the bucket (y/n)? ")
    if answer.lower() == 'y':
        try:
            bucket.objects.delete()
            bucket.delete()
            print(f"Emptied and deleted bucket {bucket.name}.\n")
        except ClientError as err:
            print(f"Couldn't empty and delete bucket {bucket.name}.")
            print(f"\t{err.response['Error']['Code']}:"
                  f"{err.response['Error']['Message']}")

    print("Thanks for watching!")
    print('-'*88)


if __name__ == '__main__':
    do_scenario(boto3.resource('s3'))
For API details, see the following topics in AWS SDK for Python (Boto3) API Reference.

The following code example shows how to manage versioned S3 objects in batches with a Lambda function.
SDK for Python (Boto3)
Shows how to manipulate Amazon Simple Storage Service (Amazon S3) versioned objects in batches by creating jobs that call AWS Lambda functions to perform processing. This example creates a version-enabled bucket, uploads the stanzas from the poem "You Are Old, Father William" by Lewis Carroll, and uses Amazon S3 batch jobs to twist the poem in various ways.
Learn how to:
- Create a Lambda function that operates on versioned objects.
- Create a manifest of objects to update.
- Create batch jobs that invoke the Lambda function to update the objects.
- Delete the Lambda function.
- Empty and delete the versioned bucket.
This example is best viewed on GitHub. For complete source code and instructions on how to set up and run, see the full example on GitHub.
Services used in this example
Amazon S3
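Because the full scenario lives only on GitHub, the following is a minimal, hedged sketch of its central step: creating an S3 Batch Operations job that invokes a Lambda function for each object listed in a manifest. Every ARN, account ID, and name below is a hypothetical placeholder, and the surrounding setup (manifest upload, IAM role, Lambda function) is assumed to already exist.

import boto3

s3control = boto3.client('s3control')

# All ARNs, IDs, and names here are hypothetical placeholders.
response = s3control.create_job(
    AccountId='111122223333',
    ConfirmationRequired=False,
    Description='Invoke a Lambda function on each object in the manifest',
    Priority=1,
    RoleArn='arn:aws:iam::111122223333:role/example-batch-role',
    Operation={'LambdaInvoke': {
        'FunctionArn':
            'arn:aws:lambda:us-west-2:111122223333:function:example-function'}},
    Manifest={
        'Spec': {
            'Format': 'S3BatchOperations_CSV_20180820',
            'Fields': ['Bucket', 'Key', 'VersionId']},
        'Location': {
            'ObjectArn': 'arn:aws:s3:::doc-example-bucket/manifest.csv',
            'ETag': 'example-manifest-etag'}},
    Report={
        'Bucket': 'arn:aws:s3:::doc-example-bucket',
        'Format': 'Report_CSV_20180820',
        'Enabled': True,
        'Prefix': 'batch-job-reports',
        'ReportScope': 'AllTasks'})
print(f"Created S3 batch job {response['JobId']}.")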
The following code example shows how to upload or download large files to and from Amazon S3.
For more information, see Uploading an object using multipart upload.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create functions that transfer files using several of the available transfer manager settings. Use a callback class to write callback progress during file transfer.
import sys
import threading

import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 * 1024
s3 = boto3.resource('s3')


class TransferCallback:
    """
    Handle callbacks from the transfer manager.

    The transfer manager periodically calls the __call__ method throughout
    the upload and download process so that it can take action, such as
    displaying progress to the user and collecting data about the transfer.
    """

    def __init__(self, target_size):
        self._target_size = target_size
        self._total_transferred = 0
        self._lock = threading.Lock()
        self.thread_info = {}

    def __call__(self, bytes_transferred):
        """
        The callback method that is called by the transfer manager.

        Display progress during file transfer and collect per-thread transfer
        data. This method can be called by multiple threads, so shared instance
        data is protected by a thread lock.
        """
        thread = threading.current_thread()
        with self._lock:
            self._total_transferred += bytes_transferred
            if thread.ident not in self.thread_info.keys():
                self.thread_info[thread.ident] = bytes_transferred
            else:
                self.thread_info[thread.ident] += bytes_transferred

            target = self._target_size * MB
            sys.stdout.write(
                f"\r{self._total_transferred} of {target} transferred "
                f"({(self._total_transferred / target) * 100:.2f}%).")
            sys.stdout.flush()


def upload_with_default_configuration(local_file_path, bucket_name,
                                      object_key, file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, using the default
    configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Callback=transfer_callback)
    return transfer_callback.thread_info


def upload_with_chunksize_and_meta(local_file_path, bucket_name, object_key,
                                   file_size_mb, metadata=None):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart chunk size and adding metadata to the Amazon S3 object.

    The multipart chunk size controls the size of the chunks of data that are
    sent in the request. A smaller chunk size typically results in the transfer
    manager using more threads for the upload.

    The metadata is a set of key-value pairs that are stored with the object
    in Amazon S3.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_chunksize=1 * MB)
    extra_args = {'Metadata': metadata} if metadata else None
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key,
        Config=config, ExtraArgs=extra_args, Callback=transfer_callback)
    return transfer_callback.thread_info


def upload_with_high_threshold(local_file_path, bucket_name, object_key,
                               file_size_mb):
    """
    Upload a file from a local folder to an Amazon S3 bucket, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard upload instead of
    a multipart upload.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key, Config=config, Callback=transfer_callback)
    return transfer_callback.thread_info


def upload_with_sse(local_file_path, bucket_name, object_key,
                    file_size_mb, sse_key=None):
    """
    Upload a file from a local folder to an Amazon S3 bucket, adding server-side
    encryption with customer-provided encryption keys to the object.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {
            'SSECustomerAlgorithm': 'AES256',
            'SSECustomerKey': sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).upload_file(
        local_file_path, object_key,
        ExtraArgs=extra_args, Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_default_configuration(bucket_name, object_key,
                                        download_file_path, file_size_mb):
    """
    Download a file from an Amazon S3 bucket to a local folder, using the
    default configuration.
    """
    transfer_callback = TransferCallback(file_size_mb)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_single_thread(bucket_name, object_key,
                                download_file_path, file_size_mb):
    """
    Download a file from an Amazon S3 bucket to a local folder, using a
    single thread.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(use_threads=False)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_high_threshold(bucket_name, object_key,
                                 download_file_path, file_size_mb):
    """
    Download a file from an Amazon S3 bucket to a local folder, setting a
    multipart threshold larger than the size of the file.

    Setting a multipart threshold larger than the size of the file results
    in the transfer manager sending the file as a standard download instead
    of a multipart download.
    """
    transfer_callback = TransferCallback(file_size_mb)
    config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, Config=config, Callback=transfer_callback)
    return transfer_callback.thread_info


def download_with_sse(bucket_name, object_key, download_file_path,
                      file_size_mb, sse_key):
    """
    Download a file from an Amazon S3 bucket to a local folder, adding a
    customer-provided encryption key to the request.

    When this kind of encryption is specified, Amazon S3 encrypts the object
    at rest and allows downloads only when the expected encryption key is
    provided in the download request.
    """
    transfer_callback = TransferCallback(file_size_mb)
    if sse_key:
        extra_args = {
            'SSECustomerAlgorithm': 'AES256',
            'SSECustomerKey': sse_key}
    else:
        extra_args = None
    s3.Bucket(bucket_name).Object(object_key).download_file(
        download_file_path, ExtraArgs=extra_args, Callback=transfer_callback)
    return transfer_callback.thread_info
Demonstrate the transfer manager functions and report results.
import hashlib
import os
import platform
import shutil
import time

import boto3
from boto3.s3.transfer import TransferConfig
from botocore.exceptions import ClientError
from botocore.exceptions import ParamValidationError
from botocore.exceptions import NoCredentialsError

import file_transfer

MB = 1024 * 1024

# These configuration attributes affect both uploads and downloads.
CONFIG_ATTRS = ('multipart_threshold', 'multipart_chunksize', 'max_concurrency',
                'use_threads')
# These configuration attributes affect only downloads.
DOWNLOAD_CONFIG_ATTRS = ('max_io_queue', 'io_chunksize', 'num_download_attempts')


class TransferDemoManager:
    """
    Manages the demonstration. Collects user input from a command line, reports
    transfer results, maintains a list of artifacts created during the
    demonstration, and cleans them up after the demonstration is completed.
    """

    def __init__(self):
        self._s3 = boto3.resource('s3')
        self._chore_list = []
        self._create_file_cmd = None
        self._size_multiplier = 0
        self.file_size_mb = 30
        self.demo_folder = None
        self.demo_bucket = None
        self._setup_platform_specific()
        self._terminal_width = shutil.get_terminal_size(fallback=(80, 80))[0]

    def collect_user_info(self):
        """
        Collect local folder and Amazon S3 bucket name from the user. These
        locations are used to store files during the demonstration.
        """
        while not self.demo_folder:
            self.demo_folder = input(
                "Which file folder do you want to use to store "
                "demonstration files? ")
            if not os.path.isdir(self.demo_folder):
                print(f"{self.demo_folder} isn't a folder!")
                self.demo_folder = None

        while not self.demo_bucket:
            self.demo_bucket = input(
                "Which Amazon S3 bucket do you want to use to store "
                "demonstration files? ")
            try:
                self._s3.meta.client.head_bucket(Bucket=self.demo_bucket)
            except ParamValidationError as err:
                print(err)
                self.demo_bucket = None
            except ClientError as err:
                print(err)
                print(
                    f"Either {self.demo_bucket} doesn't exist or you don't "
                    f"have access to it.")
                self.demo_bucket = None

    def demo(self, question, upload_func, download_func,
             upload_args=None, download_args=None):
        """Run a demonstration.

        Ask the user if they want to run this specific demonstration.
        If they say yes, create a file on the local path, upload it
        using the specified upload function, then download it using the
        specified download function.
        """
        if download_args is None:
            download_args = {}
        if upload_args is None:
            upload_args = {}
        question = question.format(self.file_size_mb)
        answer = input(f"{question} (y/n)")
        if answer.lower() == 'y':
            local_file_path, object_key, download_file_path = \
                self._create_demo_file()

            file_transfer.TransferConfig = \
                self._config_wrapper(TransferConfig, CONFIG_ATTRS)
            self._report_transfer_params(
                'Uploading', local_file_path, object_key, **upload_args)
            start_time = time.perf_counter()
            thread_info = upload_func(local_file_path, self.demo_bucket,
                                      object_key, self.file_size_mb,
                                      **upload_args)
            end_time = time.perf_counter()
            self._report_transfer_result(thread_info, end_time - start_time)

            file_transfer.TransferConfig = \
                self._config_wrapper(TransferConfig,
                                     CONFIG_ATTRS + DOWNLOAD_CONFIG_ATTRS)
            self._report_transfer_params(
                'Downloading', object_key, download_file_path, **download_args)
            start_time = time.perf_counter()
            thread_info = download_func(self.demo_bucket, object_key,
                                        download_file_path, self.file_size_mb,
                                        **download_args)
            end_time = time.perf_counter()
            self._report_transfer_result(thread_info, end_time - start_time)

    def last_name_set(self):
        """Get the name set used for the last demo."""
        return self._chore_list[-1]

    def cleanup(self):
        """
        Remove files from the demo folder, and uploaded objects from the
        Amazon S3 bucket.
        """
        print('-' * self._terminal_width)
        for local_file_path, s3_object_key, downloaded_file_path \
                in self._chore_list:
            print(f"Removing {local_file_path}")
            try:
                os.remove(local_file_path)
            except FileNotFoundError as err:
                print(err)

            print(f"Removing {downloaded_file_path}")
            try:
                os.remove(downloaded_file_path)
            except FileNotFoundError as err:
                print(err)

            if self.demo_bucket:
                print(f"Removing {self.demo_bucket}:{s3_object_key}")
                try:
                    self._s3.Bucket(self.demo_bucket).Object(
                        s3_object_key).delete()
                except ClientError as err:
                    print(err)

    def _setup_platform_specific(self):
        """Set up platform-specific command used to create a large file."""
        if platform.system() == "Windows":
            self._create_file_cmd = "fsutil file createnew {} {}"
            self._size_multiplier = MB
        elif platform.system() == "Linux" or platform.system() == "Darwin":
            self._create_file_cmd = f"dd if=/dev/urandom of={{}} " \
                                    f"bs={MB} count={{}}"
            self._size_multiplier = 1
        else:
            raise EnvironmentError(
                f"Demo of platform {platform.system()} isn't supported.")

    def _create_demo_file(self):
        """
        Create a file in the demo folder specified by the user. Store the local
        path, object name, and download path for later cleanup.

        Only the local file is created by this method. The Amazon S3 object and
        download file are created later during the demonstration.

        Returns:
            A tuple that contains the local file path, object name, and download
            file path.
        """
        file_name_template = "TestFile{}-{}.demo"
        local_suffix = "local"
        object_suffix = "s3object"
        download_suffix = "downloaded"
        file_tag = len(self._chore_list) + 1

        local_file_path = os.path.join(
            self.demo_folder,
            file_name_template.format(file_tag, local_suffix))

        s3_object_key = file_name_template.format(file_tag, object_suffix)

        downloaded_file_path = os.path.join(
            self.demo_folder,
            file_name_template.format(file_tag, download_suffix))

        filled_cmd = self._create_file_cmd.format(
            local_file_path,
            self.file_size_mb * self._size_multiplier)

        print(f"Creating file of size {self.file_size_mb} MB "
              f"in {self.demo_folder} by running:")
        print(f"{'':4}{filled_cmd}")
        os.system(filled_cmd)

        chore = (local_file_path, s3_object_key, downloaded_file_path)
        self._chore_list.append(chore)
        return chore

    def _report_transfer_params(self, verb, source_name, dest_name, **kwargs):
        """Report configuration and extra arguments used for a file transfer."""
        print('-' * self._terminal_width)
        print(f'{verb} {source_name} ({self.file_size_mb} MB) to {dest_name}')
        if kwargs:
            print('With extra args:')
            for arg, value in kwargs.items():
                print(f'{"":4}{arg:<20}: {value}')

    @staticmethod
    def ask_user(question):
        """
        Ask the user a yes or no question.

        Returns:
            True when the user answers 'y' or 'Y'; otherwise, False.
        """
        answer = input(f"{question} (y/n) ")
        return answer.lower() == 'y'

    @staticmethod
    def _config_wrapper(func, config_attrs):
        def wrapper(*args, **kwargs):
            config = func(*args, **kwargs)
            print('With configuration:')
            for attr in config_attrs:
                print(f'{"":4}{attr:<20}: {getattr(config, attr)}')
            return config

        return wrapper

    @staticmethod
    def _report_transfer_result(thread_info, elapsed):
        """Report the result of a transfer, including per-thread data."""
        print(f"\nUsed {len(thread_info)} threads.")
        for ident, byte_count in thread_info.items():
            print(f"{'':4}Thread {ident} copied {byte_count} bytes.")
        print(f"Your transfer took {elapsed:.2f} seconds.")


def main():
    """
    Run the demonstration script for s3_file_transfer.
    """
    demo_manager = TransferDemoManager()
    demo_manager.collect_user_info()

    # Upload and download with default configuration. Because the file is 30 MB
    # and the default multipart_threshold is 8 MB, both upload and download are
    # multipart transfers.
    demo_manager.demo(
        "Do you want to upload and download a {} MB file "
        "using the default configuration?",
        file_transfer.upload_with_default_configuration,
        file_transfer.download_with_default_configuration)

    # Upload and download with multipart_threshold set higher than the size of
    # the file. This causes the transfer manager to use standard transfers
    # instead of multipart transfers.
    demo_manager.demo(
        "Do you want to upload and download a {} MB file "
        "as a standard (not multipart) transfer?",
        file_transfer.upload_with_high_threshold,
        file_transfer.download_with_high_threshold)

    # Upload with specific chunk size and additional metadata.
    # Download with a single thread.
    demo_manager.demo(
        "Do you want to upload a {} MB file with a smaller chunk size and "
        "then download the same file using a single thread?",
        file_transfer.upload_with_chunksize_and_meta,
        file_transfer.download_with_single_thread,
        upload_args={
            'metadata': {
                'upload_type': 'chunky',
                'favorite_color': 'aqua',
                'size': 'medium'}})

    # Upload using server-side encryption with customer-provided
    # encryption keys.
    # Generate a 256-bit key from a passphrase.
    sse_key = hashlib.sha256('demo_passphrase'.encode('utf-8')).digest()
    demo_manager.demo(
        "Do you want to upload and download a {} MB file using "
        "server-side encryption?",
        file_transfer.upload_with_sse,
        file_transfer.download_with_sse,
        upload_args={'sse_key': sse_key},
        download_args={'sse_key': sse_key})

    # Download without specifying an encryption key to show that the
    # encryption key must be included to download an encrypted object.
    if demo_manager.ask_user("Do you want to try to download the encrypted "
                             "object without sending the required key?"):
        try:
            _, object_key, download_file_path = \
                demo_manager.last_name_set()
            file_transfer.download_with_default_configuration(
                demo_manager.demo_bucket, object_key, download_file_path,
                demo_manager.file_size_mb)
        except ClientError as err:
            print("Got expected error when trying to download an encrypted "
                  "object without specifying encryption info:")
            print(f"{'':4}{err}")

    # Remove all created and downloaded files, remove all objects from
    # S3 storage.
    if demo_manager.ask_user(
            "Demonstration complete. Do you want to remove local files "
            "and S3 objects?"):
        demo_manager.cleanup()


if __name__ == '__main__':
    try:
        main()
    except NoCredentialsError as error:
        print(error)
        print("To run this example, you must have valid credentials in "
              "a shared credential file or set in environment variables.")
The following code example shows how to:
- Create a versioned S3 bucket.
- Get all versions of an object.
- Roll an object back to a previous version.
- Delete and restore a versioned object.
- Permanently delete all versions of an object.
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Create functions that wrap S3 actions.
def create_versioned_bucket(bucket_name, prefix):
    """
    Creates an Amazon S3 bucket, enables it for versioning, and configures a lifecycle
    that expires noncurrent object versions after 7 days.

    Adding a lifecycle configuration to a versioned bucket is a best practice.
    It helps prevent objects in the bucket from accumulating a large number of
    noncurrent versions, which can slow down request performance.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket_name: The name of the bucket to create.
    :param prefix: Identifies which objects are automatically expired under the
                   configured lifecycle rules.
    :return: The newly created bucket.
    """
    try:
        bucket = s3.create_bucket(
            Bucket=bucket_name,
            CreateBucketConfiguration={
                'LocationConstraint': s3.meta.client.meta.region_name
            }
        )
        logger.info("Created bucket %s.", bucket.name)
    except ClientError as error:
        if error.response['Error']['Code'] == 'BucketAlreadyOwnedByYou':
            logger.warning("Bucket %s already exists! Using it.", bucket_name)
            bucket = s3.Bucket(bucket_name)
        else:
            logger.exception("Couldn't create bucket %s.", bucket_name)
            raise

    try:
        bucket.Versioning().enable()
        logger.info("Enabled versioning on bucket %s.", bucket.name)
    except ClientError:
        logger.exception("Couldn't enable versioning on bucket %s.", bucket.name)
        raise

    try:
        expiration = 7
        bucket.LifecycleConfiguration().put(
            LifecycleConfiguration={
                'Rules': [{
                    'Status': 'Enabled',
                    'Prefix': prefix,
                    'NoncurrentVersionExpiration': {'NoncurrentDays': expiration}
                }]
            }
        )
        logger.info("Configured lifecycle to expire noncurrent versions after %s days "
                    "on bucket %s.", expiration, bucket.name)
    except ClientError as error:
        logger.warning("Couldn't configure lifecycle on bucket %s because %s. "
                       "Continuing anyway.", bucket.name, error)

    return bucket


def rollback_object(bucket, object_key, version_id):
    """
    Rolls back an object to an earlier version by deleting all versions that
    occurred after the specified rollback version.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket: The bucket that holds the object to roll back.
    :param object_key: The object to roll back.
    :param version_id: The version ID to roll back to.
    """
    # Versions must be sorted by last_modified date because delete markers are
    # at the end of the list even when they are interspersed in time.
    versions = sorted(bucket.object_versions.filter(Prefix=object_key),
                      key=attrgetter('last_modified'), reverse=True)
    logger.debug(
        "Got versions:\n%s",
        '\n'.join([f"\t{version.version_id}, last modified {version.last_modified}"
                   for version in versions]))

    if version_id in [ver.version_id for ver in versions]:
        print(f"Rolling back to version {version_id}")
        for version in versions:
            if version.version_id != version_id:
                version.delete()
                print(f"Deleted version {version.version_id}")
            else:
                break
        print(f"Active version is now {bucket.Object(object_key).version_id}")
    else:
        raise KeyError(f"{version_id} was not found in the list of versions for "
                       f"{object_key}.")


def revive_object(bucket, object_key):
    """
    Revives a versioned object that was deleted by removing the object's active
    delete marker.
    A versioned object presents as deleted when its latest version is a delete marker.
    By removing the delete marker, we make the previous version the latest version
    and the object then presents as *not* deleted.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket: The bucket that contains the object.
    :param object_key: The object to revive.
    """
    # Get the latest version for the object.
    response = s3.meta.client.list_object_versions(
        Bucket=bucket.name, Prefix=object_key, MaxKeys=1)

    if 'DeleteMarkers' in response:
        latest_version = response['DeleteMarkers'][0]
        if latest_version['IsLatest']:
            logger.info("Object %s was indeed deleted on %s. Let's revive it.",
                        object_key, latest_version['LastModified'])
            obj = bucket.Object(object_key)
            obj.Version(latest_version['VersionId']).delete()
            logger.info("Revived %s, active version is now %s with body '%s'",
                        object_key, obj.version_id, obj.get()['Body'].read())
        else:
            logger.warning("Delete marker is not the latest version for %s!",
                           object_key)
    elif 'Versions' in response:
        logger.warning("Got an active version for %s, nothing to do.", object_key)
    else:
        logger.error("Couldn't get any version info for %s.", object_key)


def permanently_delete_object(bucket, object_key):
    """
    Permanently deletes a versioned object by deleting all of its versions.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket: The bucket that contains the object.
    :param object_key: The object to delete.
    """
    try:
        bucket.object_versions.filter(Prefix=object_key).delete()
        logger.info("Permanently deleted all versions of object %s.", object_key)
    except ClientError:
        logger.exception("Couldn't delete all versions of %s.", object_key)
        raise
Upload a stanza of a poem to a versioned object and perform a series of actions on it.
def usage_demo_single_object(obj_prefix='demo-versioning/'):
    """
    Demonstrates usage of versioned object functions. This demo uploads a stanza
    of a poem and performs a series of revisions, deletions, and revivals on it.

    :param obj_prefix: The prefix to assign to objects created by this demo.
    """
    with open('father_william.txt') as file:
        stanzas = file.read().split('\n\n')

    width = get_terminal_size((80, 20))[0]
    print('-'*width)
    print("Welcome to the usage demonstration of Amazon S3 versioning.")
    print("This demonstration uploads a single stanza of a poem to an Amazon "
          "S3 bucket and then applies various revisions to it.")
    print('-'*width)
    print("Creating a version-enabled bucket for the demo...")
    bucket = create_versioned_bucket('bucket-' + str(uuid.uuid1()), obj_prefix)

    print("\nThe initial version of our stanza:")
    print(stanzas[0])

    # Add the first stanza and revise it a few times.
    print("\nApplying some revisions to the stanza...")
    obj_stanza_1 = bucket.Object(f'{obj_prefix}stanza-1')
    obj_stanza_1.put(Body=bytes(stanzas[0], 'utf-8'))
    obj_stanza_1.put(Body=bytes(stanzas[0].upper(), 'utf-8'))
    obj_stanza_1.put(Body=bytes(stanzas[0].lower(), 'utf-8'))
    obj_stanza_1.put(Body=bytes(stanzas[0][::-1], 'utf-8'))
    print("The latest version of the stanza is now:",
          obj_stanza_1.get()['Body'].read().decode('utf-8'),
          sep='\n')

    # Versions are returned in order, most recent first.
    obj_stanza_1_versions = bucket.object_versions.filter(Prefix=obj_stanza_1.key)
    print(
        "The version data of the stanza revisions:",
        *[f" {version.version_id}, last modified {version.last_modified}"
          for version in obj_stanza_1_versions],
        sep='\n'
    )

    # Rollback two versions.
    print("\nRolling back two versions...")
    rollback_object(bucket, obj_stanza_1.key,
                    list(obj_stanza_1_versions)[2].version_id)
    print("The latest version of the stanza:",
          obj_stanza_1.get()['Body'].read().decode('utf-8'),
          sep='\n')

    # Delete the stanza
    print("\nDeleting the stanza...")
    obj_stanza_1.delete()
    try:
        obj_stanza_1.get()
    except ClientError as error:
        if error.response['Error']['Code'] == 'NoSuchKey':
            print("The stanza is now deleted (as expected).")
        else:
            raise

    # Revive the stanza
    print("\nRestoring the stanza...")
    revive_object(bucket, obj_stanza_1.key)
    print("The stanza is restored! The latest version is again:",
          obj_stanza_1.get()['Body'].read().decode('utf-8'),
          sep='\n')

    # Permanently delete all versions of the object. This cannot be undone!
    print("\nPermanently deleting all versions of the stanza...")
    permanently_delete_object(bucket, obj_stanza_1.key)
    obj_stanza_1_versions = bucket.object_versions.filter(Prefix=obj_stanza_1.key)
    if len(list(obj_stanza_1_versions)) == 0:
        print("The stanza has been permanently deleted and now has no versions.")
    else:
        print("Something went wrong. The stanza still exists!")

    print(f"\nRemoving {bucket.name}...")
    bucket.delete()
    print(f"{bucket.name} deleted.")
    print("Demo done!")
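To run the demo end to end, the module needs an entry point. The sketch below is a hypothetical runner, assuming the setup sketch shown earlier and a father_william.txt poem file in the working directory (the complete sample on GitHub ships this file alongside the code):

# Hypothetical entry point for running the demo locally.
if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')
    usage_demo_single_object()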
-
For API details, see the following topics in the AWS SDK for Python (Boto3) API Reference.
-