Class CfnLocationHDFS

java.lang.Object
  software.amazon.jsii.JsiiObject
    software.constructs.Construct
      software.amazon.awscdk.CfnElement
        software.amazon.awscdk.CfnRefElement
          software.amazon.awscdk.CfnResource
            software.amazon.awscdk.services.datasync.CfnLocationHDFS
All Implemented Interfaces:
IConstruct, IDependable, IInspectable, software.amazon.jsii.JsiiSerializable, software.constructs.IConstruct

@Generated(value="jsii-pacmak/1.84.0 (build 5404dcf)", date="2023-06-19T16:29:56.182Z") @Stability(Stable) public class CfnLocationHDFS extends CfnResource implements IInspectable
A CloudFormation AWS::DataSync::LocationHDFS resource.

The AWS::DataSync::LocationHDFS resource specifies an endpoint for a Hadoop Distributed File System (HDFS).

Example:

 // The code below shows an example of how to instantiate this type.
 // The values are placeholders you should change.
 import java.util.List;
 import software.amazon.awscdk.CfnTag;
 import software.amazon.awscdk.services.datasync.*;

 CfnLocationHDFS cfnLocationHDFS = CfnLocationHDFS.Builder.create(this, "MyCfnLocationHDFS")
         .agentArns(List.of("agentArns"))
         .authenticationType("authenticationType")
         .nameNodes(List.of(CfnLocationHDFS.NameNodeProperty.builder()
                 .hostname("hostname")
                 .port(123)
                 .build()))
         // the properties below are optional
         .blockSize(123)
         .kerberosKeytab("kerberosKeytab")
         .kerberosKrb5Conf("kerberosKrb5Conf")
         .kerberosPrincipal("kerberosPrincipal")
         .kmsKeyProviderUri("kmsKeyProviderUri")
         .qopConfiguration(CfnLocationHDFS.QopConfigurationProperty.builder()
                 .dataTransferProtection("dataTransferProtection")
                 .rpcProtection("rpcProtection")
                 .build())
         .replicationFactor(123)
         .simpleUser("simpleUser")
         .subdirectory("subdirectory")
         .tags(List.of(CfnTag.builder()
                 .key("key")
                 .value("value")
                 .build()))
         .build();
 
  • Field Details

    • CFN_RESOURCE_TYPE_NAME

      @Stability(Stable) public static final String CFN_RESOURCE_TYPE_NAME
      The CloudFormation resource type name for this resource class.
  • Constructor Details

    • CfnLocationHDFS

      protected CfnLocationHDFS(software.amazon.jsii.JsiiObjectRef objRef)
    • CfnLocationHDFS

      protected CfnLocationHDFS(software.amazon.jsii.JsiiObject.InitializationMode initializationMode)
    • CfnLocationHDFS

      @Stability(Stable) public CfnLocationHDFS(@NotNull Construct scope, @NotNull String id, @NotNull CfnLocationHDFSProps props)
      Create a new AWS::DataSync::LocationHDFS.

      Parameters:
      scope - scope in which this resource is defined. This parameter is required.
      id - scoped id of the resource. This parameter is required.
      props - resource properties. This parameter is required.
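
      The same construction can also be expressed by assembling the props object separately with CfnLocationHDFSProps.builder() and passing it to this constructor. A minimal sketch; the agent ARN, hostname, and user name below are placeholder values, and "this" is assumed to be the enclosing Stack or other Construct scope:

       CfnLocationHDFSProps props = CfnLocationHDFSProps.builder()
               .agentArns(List.of("arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"))
               .authenticationType("SIMPLE")
               .nameNodes(List.of(CfnLocationHDFS.NameNodeProperty.builder()
                       .hostname("namenode.example.com")
                       .port(8020)
                       .build()))
               .simpleUser("hdfs-user")
               .build();
       CfnLocationHDFS location = new CfnLocationHDFS(this, "MyCfnLocationHDFS", props);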
  • Method Details

    • inspect

      @Stability(Stable) public void inspect(@NotNull TreeInspector inspector)
      Examines the CloudFormation resource and discloses attributes.

      Specified by:
      inspect in interface IInspectable
      Parameters:
      inspector - tree inspector to collect and process attributes. This parameter is required.
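
      A minimal sketch of the IInspectable contract, assuming TreeInspector from software.amazon.awscdk is available in scope:

       // Have the resource disclose its CloudFormation attributes to the inspector.
       TreeInspector inspector = new TreeInspector();
       cfnLocationHDFS.inspect(inspector);
       // The collected attributes can then be read back from the inspector
       // (exposed through its attributes property).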
    • renderProperties

      @Stability(Stable) @NotNull protected Map<String,Object> renderProperties(@NotNull Map<String,Object> props)
      Overrides:
      renderProperties in class CfnResource
      Parameters:
      props - This parameter is required.
    • getAttrLocationArn

      @Stability(Stable) @NotNull public String getAttrLocationArn()
      The Amazon Resource Name (ARN) of the HDFS cluster location to describe.
    • getAttrLocationUri

      @Stability(Stable) @NotNull public String getAttrLocationUri()
      The URI of the HDFS cluster location.
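
      Both attributes resolve at deploy time, so a typical use is to surface them as stack outputs. An illustrative sketch using CfnOutput from software.amazon.awscdk (not part of this class):

       CfnOutput.Builder.create(this, "HdfsLocationArn")
               .value(cfnLocationHDFS.getAttrLocationArn())
               .build();
       CfnOutput.Builder.create(this, "HdfsLocationUri")
               .value(cfnLocationHDFS.getAttrLocationUri())
               .build();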
    • getCfnProperties

      @Stability(Stable) @NotNull protected Map<String,Object> getCfnProperties()
      Overrides:
      getCfnProperties in class CfnResource
    • getTags

      @Stability(Stable) @NotNull public TagManager getTags()
      The key-value pair that represents the tag that you want to add to the location.

      The value can be an empty string. We recommend using tags to name your resources.
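
      Because this is an L1 resource, tags are applied through the returned TagManager rather than a plain property. An illustrative sketch:

       // Add or overwrite a tag on the location after construction.
       cfnLocationHDFS.getTags().setTag("Environment", "dev");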

    • getAgentArns

      @Stability(Stable) @NotNull public List<String> getAgentArns()
      The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
    • setAgentArns

      @Stability(Stable) public void setAgentArns(@NotNull List<String> value)
      The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
    • getAuthenticationType

      @Stability(Stable) @NotNull public String getAuthenticationType()
      The type of authentication used to determine the identity of the user.
    • setAuthenticationType

      @Stability(Stable) public void setAuthenticationType(@NotNull String value)
      The type of authentication used to determine the identity of the user.
    • getNameNodes

      @Stability(Stable) @NotNull public Object getNameNodes()
      The NameNode that manages the HDFS namespace.

      The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.

    • setNameNodes

      @Stability(Stable) public void setNameNodes(@NotNull IResolvable value)
      The NameNode that manages the HDFS namespace.

      The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.

    • setNameNodes

      @Stability(Stable) public void setNameNodes(@NotNull List<Object> value)
      The NameNode that manages the HDFS namespace.

      The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
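
      An illustrative sketch of replacing the NameNode configuration after construction; the hostname and port are placeholder values:

       cfnLocationHDFS.setNameNodes(List.of(CfnLocationHDFS.NameNodeProperty.builder()
               .hostname("namenode.internal.example.com")
               .port(8020)
               .build()));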

    • getBlockSize

      @Stability(Stable) @Nullable public Number getBlockSize()
      The size of data blocks to write into the HDFS cluster.

      The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).

    • setBlockSize

      @Stability(Stable) public void setBlockSize(@Nullable Number value)
      The size of data blocks to write into the HDFS cluster.

      The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
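
      For example, a 256 MiB block size satisfies the multiple-of-512-bytes constraint:

       cfnLocationHDFS.setBlockSize(256 * 1024 * 1024); // 268435456 bytes = 256 MiB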

    • getKerberosKeytab

      @Stability(Stable) @Nullable public String getKerberosKeytab()
      The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys.

      Provide the base64-encoded file text. If KERBEROS is specified for AuthType, this value is required.

    • setKerberosKeytab

      @Stability(Stable) public void setKerberosKeytab(@Nullable String value)
      The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys.

      Provide the base64-encoded file text. If KERBEROS is specified for AuthType, this value is required.

    • getKerberosKrb5Conf

      @Stability(Stable) @Nullable public String getKerberosKrb5Conf()
      The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf by providing a string of the file's contents or an Amazon S3 presigned URL of the file. If KERBEROS is specified for AuthType, this value is required.
    • setKerberosKrb5Conf

      @Stability(Stable) public void setKerberosKrb5Conf(@Nullable String value)
      The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf by providing a string of the file's contents or an Amazon S3 presigned URL of the file. If KERBEROS is specified for AuthType, this value is required.
    • getKerberosPrincipal

      @Stability(Stable) @Nullable public String getKerberosPrincipal()
      The Kerberos principal with access to the files and folders on the HDFS cluster.

      If KERBEROS is specified for AuthenticationType, this parameter is required.

    • setKerberosPrincipal

      @Stability(Stable) public void setKerberosPrincipal(@Nullable String value)
      The Kerberos principal with access to the files and folders on the HDFS cluster.

      If KERBEROS is specified for AuthenticationType, this parameter is required.
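
      An illustrative sketch of configuring Kerberos authentication after construction; all values are placeholders, and the keytab must be the base64-encoded file text:

       cfnLocationHDFS.setAuthenticationType("KERBEROS");
       cfnLocationHDFS.setKerberosPrincipal("hdfs-user@EXAMPLE.COM");
       cfnLocationHDFS.setKerberosKeytab("BASE64_ENCODED_KEYTAB_CONTENTS");
       cfnLocationHDFS.setKerberosKrb5Conf("KRB5_CONF_FILE_CONTENTS");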

    • getKmsKeyProviderUri

      @Stability(Stable) @Nullable public String getKmsKeyProviderUri()
      The URI of the HDFS cluster's Key Management Server (KMS).
    • setKmsKeyProviderUri

      @Stability(Stable) public void setKmsKeyProviderUri(@Nullable String value)
      The URI of the HDFS cluster's Key Management Server (KMS).
    • getQopConfiguration

      @Stability(Stable) @Nullable public Object getQopConfiguration()
      The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster.

      If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.

    • setQopConfiguration

      @Stability(Stable) public void setQopConfiguration(@Nullable IResolvable value)
      The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster.

      If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.

    • setQopConfiguration

      @Stability(Stable) public void setQopConfiguration(@Nullable CfnLocationHDFS.QopConfigurationProperty value)
      The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster.

      If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
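
      An illustrative sketch of setting the QOP explicitly; DISABLED, AUTHENTICATION, INTEGRITY, and PRIVACY are the values accepted by both fields:

       cfnLocationHDFS.setQopConfiguration(CfnLocationHDFS.QopConfigurationProperty.builder()
               .rpcProtection("PRIVACY")
               .dataTransferProtection("PRIVACY")
               .build());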

    • getReplicationFactor

      @Stability(Stable) @Nullable public Number getReplicationFactor()
      The number of DataNodes to replicate the data to when writing to the HDFS cluster.

      By default, data is replicated to three DataNodes.

    • setReplicationFactor

      @Stability(Stable) public void setReplicationFactor(@Nullable Number value)
      The number of DataNodes to replicate the data to when writing to the HDFS cluster.

      By default, data is replicated to three DataNodes.

    • getSimpleUser

      @Stability(Stable) @Nullable public String getSimpleUser()
      The user name used to identify the client on the host operating system.

      If SIMPLE is specified for AuthenticationType, this parameter is required.

    • setSimpleUser

      @Stability(Stable) public void setSimpleUser(@Nullable String value)
      The user name used to identify the client on the host operating system.

      If SIMPLE is specified for AuthenticationType, this parameter is required.

    • getSubdirectory

      @Stability(Stable) @Nullable public String getSubdirectory()
      A subdirectory in the HDFS cluster.

      This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.

    • setSubdirectory

      @Stability(Stable) public void setSubdirectory(@Nullable String value)
      A subdirectory in the HDFS cluster.

      This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
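
      An illustrative sketch; the path is a placeholder:

       cfnLocationHDFS.setSubdirectory("/data/exports");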