- class aws_cdk.aws_ec2.InitSource(target_directory, service_handles=None)
Extract an archive into a directory.
```python
# my_bucket: s3.Bucket

handle = ec2.InitServiceRestartHandle()
ec2.CloudFormationInit.from_elements(
    ec2.InitFile.from_string("/etc/nginx/nginx.conf", "...",
        service_restart_handles=[handle]),
    ec2.InitSource.from_s3_object("/var/www/html", my_bucket, "html.zip",
        service_restart_handles=[handle]),
    ec2.InitService.enable("nginx",
        service_restart_handle=handle)
)
```
- property element_type
Returns the init element type for this element.
- classmethod from_asset(target_directory, path, *, service_restart_handles=None, deploy_time=None, readers=None, asset_hash=None, asset_hash_type=None, bundling=None, exclude=None, follow_symlinks=None, ignore_mode=None)
Create an InitSource from an asset created from the given path.
- Parameters:
- service_restart_handles (Optional[Sequence[InitServiceRestartHandle]]) – Restart the given services after this archive has been extracted. Default: - Do not restart any service
- deploy_time (Optional[bool]) – Whether or not the asset needs to exist beyond deployment time; i.e. it is copied over to a different location and not needed afterwards. Setting this property to true has an impact on the lifecycle of the asset, because we will assume that it is safe to delete after the CloudFormation deployment succeeds. For example, Lambda Function assets are copied over to Lambda during deployment. Therefore, it is not necessary to store the asset in S3, so we consider those deployTime assets. Default: false
- readers (Optional[Sequence[IGrantable]]) – A list of principals that should be able to read this asset from S3. You can use asset.grantRead(principal) to grant read permissions later. Default: - No principals that can read file asset.
- asset_hash (Optional[str]) – Specify a custom hash for this asset. If assetHashType is set it must be set to AssetHashType.CUSTOM. For consistency, this custom hash will be SHA256 hashed and encoded as hex. The resulting hash will be the asset hash. NOTE: the hash is used in order to identify a specific revision of the asset, and used for optimizing and caching deployment activities related to this asset such as packaging, uploading to Amazon S3, etc. If you chose to customize the hash, you will need to make sure it is updated every time the asset changes, or otherwise it is possible that some deployments will not be invalidated. Default: - based on assetHashType
- asset_hash_type (Optional[AssetHashType]) – Specifies the type of hash to calculate for this asset. If assetHash is configured, this option must be AssetHashType.CUSTOM. Default: - the default is AssetHashType.SOURCE, but if assetHash is explicitly specified this value defaults to AssetHashType.CUSTOM.
- bundling (Optional[BundlingOptions]) – Bundle the asset by executing a command in a Docker container or a custom bundling provider. The asset path will be mounted at /asset-input. The Docker container is responsible for putting content at /asset-output. The content at /asset-output will be zipped and used as the final asset. Default: - uploaded as-is to S3 if the asset is a regular file or a .zip file, archived into a .zip file and uploaded to S3 otherwise
- exclude (Optional[Sequence[str]]) – File paths matching the patterns will be excluded. See ignoreMode to set the matching behavior. Has no effect on Assets bundled using the bundling property. Default: - nothing is excluded
- follow_symlinks (Optional[SymlinkFollowMode]) – A strategy for how to handle symlinks. Default: SymlinkFollowMode.NEVER
- ignore_mode (Optional[IgnoreMode]) – The ignore behavior to use for exclude patterns. Default: IgnoreMode.GLOB
- Return type:
InitSource
- classmethod from_existing_asset(target_directory, asset, *, service_restart_handles=None)
Extract a directory from an existing directory asset.
- classmethod from_git_hub(target_directory, owner, repo, ref_spec=None, *, service_restart_handles=None)
Extract a GitHub branch into a given directory.
- classmethod from_s3_object(target_directory, bucket, key, *, service_restart_handles=None)
Extract an archive stored in an S3 bucket into the given directory.
- classmethod from_url(target_directory, url, *, service_restart_handles=None)
Retrieve a URL and extract it into the given directory.