CfnLocationHDFS
- class aws_cdk.aws_datasync.CfnLocationHDFS(scope, id, *, agent_arns, authentication_type, name_nodes, block_size=None, kerberos_keytab=None, kerberos_krb5_conf=None, kerberos_principal=None, kms_key_provider_uri=None, qop_configuration=None, replication_factor=None, simple_user=None, subdirectory=None, tags=None)
Bases:
CfnResource
The AWS::DataSync::LocationHDFS resource specifies an endpoint for a Hadoop Distributed File System (HDFS).
- See:
- CloudformationResource:
AWS::DataSync::LocationHDFS
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_datasync as datasync

cfn_location_hDFS = datasync.CfnLocationHDFS(self, "MyCfnLocationHDFS",
    agent_arns=["agentArns"],
    authentication_type="authenticationType",
    name_nodes=[datasync.CfnLocationHDFS.NameNodeProperty(
        hostname="hostname",
        port=123
    )],

    # the properties below are optional
    block_size=123,
    kerberos_keytab="kerberosKeytab",
    kerberos_krb5_conf="kerberosKrb5Conf",
    kerberos_principal="kerberosPrincipal",
    kms_key_provider_uri="kmsKeyProviderUri",
    qop_configuration=datasync.CfnLocationHDFS.QopConfigurationProperty(
        data_transfer_protection="dataTransferProtection",
        rpc_protection="rpcProtection"
    ),
    replication_factor=123,
    simple_user="simpleUser",
    subdirectory="subdirectory",
    tags=[CfnTag(
        key="key",
        value="value"
    )]
)
- Parameters:
scope (Construct) – Scope in which this resource is defined.
id (str) – Construct identifier for this resource (unique in its scope).
agent_arns (Sequence[str]) – The Amazon Resource Names (ARNs) of the DataSync agents that can connect to your HDFS cluster.
authentication_type (str) – The authentication mode used to determine the identity of the user.
name_nodes (Union[IResolvable, Sequence[Union[IResolvable, NameNodeProperty, Dict[str, Any]]]]) – The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
block_size (Union[int, float, None]) – The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
kerberos_keytab (Optional[str]) – The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. Provide the base64-encoded file text. If KERBEROS is specified for AuthType, this value is required.
kerberos_krb5_conf (Optional[str]) – The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf by providing a string of the file’s contents or an Amazon S3 presigned URL of the file. If KERBEROS is specified for AuthType, this value is required.
kerberos_principal (Optional[str]) – The Kerberos principal with access to the files and folders on the HDFS cluster. If KERBEROS is specified for AuthenticationType, this parameter is required.
kms_key_provider_uri (Optional[str]) – The URI of the HDFS cluster’s Key Management Server (KMS).
qop_configuration (Union[IResolvable, QopConfigurationProperty, Dict[str, Any], None]) – The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn’t specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
replication_factor (Union[int, float, None]) – The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes. Default: 3
simple_user (Optional[str]) – The user name used to identify the client on the host operating system. If SIMPLE is specified for AuthenticationType, this parameter is required.
subdirectory (Optional[str]) – A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn’t specified, it will default to /.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
target (CfnResource) –
- Deprecated:
use addDependency
- Stability:
deprecated
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str) –
value (Any) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- Return type:
None
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with “Properties.” (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides. Example:
"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    },
  ]
  ...
}
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
policy (Optional[RemovalPolicy]) –
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- See:
- Return type:
None
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
attribute_name (str) – The name of the attribute.
type_hint (Optional[ResolutionTypeHint]) –
- Return type:
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (str) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- Return type:
Any
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
target (CfnResource) – The dependency to replace.
new_target (CfnResource) – The new dependency to add.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::DataSync::LocationHDFS'
- agent_arns
The Amazon Resource Names (ARNs) of the DataSync agents that can connect to your HDFS cluster.
- attr_location_arn
The Amazon Resource Name (ARN) of the HDFS cluster location to describe.
- CloudformationAttribute:
LocationArn
- attr_location_uri
The URI of the HDFS cluster location.
- CloudformationAttribute:
LocationUri
- authentication_type
The authentication mode used to determine the identity of the user.
- block_size
The size of data blocks to write into the HDFS cluster.
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- kerberos_keytab
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys.
- kerberos_krb5_conf
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf by providing a string of the file’s contents or an Amazon S3 presigned URL of the file. If KERBEROS is specified for AuthType, this value is required.
- kerberos_principal
The Kerberos principal with access to the files and folders on the HDFS cluster.
- kms_key_provider_uri
The URI of the HDFS cluster’s Key Management Server (KMS).
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- name_nodes
The NameNode that manages the HDFS namespace.
- node
The tree node.
- qop_configuration
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- replication_factor
The number of DataNodes to replicate the data to when writing to the HDFS cluster.
- simple_user
The user name used to identify the client on the host operating system.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- subdirectory
A subdirectory in the HDFS cluster.
- tags
Tag Manager which manages the tags for this resource.
- tags_raw
The key-value pair that represents the tag that you want to add to the location.
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
x (Any) –
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
x (Any) –
- Return type:
bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and to use this type-testing method instead.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Construct.
NameNodeProperty
- class CfnLocationHDFS.NameNodeProperty(*, hostname, port)
Bases:
object
The NameNode of the Hadoop Distributed File System (HDFS).
The NameNode manages the file system’s namespace and performs operations such as opening, closing, and renaming files and directories. The NameNode also contains the information to map blocks of data to the DataNodes.
- Parameters:
hostname (str) – The hostname of the NameNode in the HDFS cluster. This value is the IP address or Domain Name Service (DNS) name of the NameNode. An agent that’s installed on-premises uses this hostname to communicate with the NameNode in the network.
port (Union[int, float]) – The port that the NameNode uses to listen to client requests.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_datasync as datasync

name_node_property = datasync.CfnLocationHDFS.NameNodeProperty(
    hostname="hostname",
    port=123
)
Attributes
- hostname
The hostname of the NameNode in the HDFS cluster.
This value is the IP address or Domain Name Service (DNS) name of the NameNode. An agent that’s installed on-premises uses this hostname to communicate with the NameNode in the network.
- port
The port that the NameNode uses to listen to client requests.
QopConfigurationProperty
- class CfnLocationHDFS.QopConfigurationProperty(*, data_transfer_protection=None, rpc_protection=None)
Bases:
object
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer privacy settings configured on the Hadoop Distributed File System (HDFS) cluster.
- Parameters:
data_transfer_protection (Optional[str]) – The data transfer protection setting configured on the HDFS cluster. This setting corresponds to your dfs.data.transfer.protection setting in the hdfs-site.xml file on your Hadoop cluster. Default: - “PRIVACY”
rpc_protection (Optional[str]) – The Remote Procedure Call (RPC) protection setting configured on the HDFS cluster. This setting corresponds to your hadoop.rpc.protection setting in your core-site.xml file on your Hadoop cluster. Default: - “PRIVACY”
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_datasync as datasync

qop_configuration_property = datasync.CfnLocationHDFS.QopConfigurationProperty(
    data_transfer_protection="dataTransferProtection",
    rpc_protection="rpcProtection"
)
Attributes
- data_transfer_protection
The data transfer protection setting configured on the HDFS cluster.
This setting corresponds to your dfs.data.transfer.protection setting in the hdfs-site.xml file on your Hadoop cluster.
- rpc_protection
The Remote Procedure Call (RPC) protection setting configured on the HDFS cluster.
This setting corresponds to your hadoop.rpc.protection setting in your core-site.xml file on your Hadoop cluster.