@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class CreateLocationHdfsRequest extends AmazonWebServiceRequest implements Serializable, Cloneable
Fields inherited from class com.amazonaws.AmazonWebServiceRequest: NOOP
| Constructor and Description |
|---|
| CreateLocationHdfsRequest() |
| Modifier and Type | Method and Description |
|---|---|
| CreateLocationHdfsRequest | clone() Creates a shallow clone of this object for all fields except the handler context. |
| boolean | equals(Object obj) |
| List<String> | getAgentArns() The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster. |
| String | getAuthenticationType() The type of authentication used to determine the identity of the user. |
| Integer | getBlockSize() The size of data blocks to write into the HDFS cluster. |
| ByteBuffer | getKerberosKeytab() The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. |
| ByteBuffer | getKerberosKrb5Conf() The krb5.conf file that contains the Kerberos configuration information. |
| String | getKerberosPrincipal() The Kerberos principal with access to the files and folders on the HDFS cluster. |
| String | getKmsKeyProviderUri() The URI of the HDFS cluster's Key Management Server (KMS). |
| List<HdfsNameNode> | getNameNodes() The NameNode that manages the HDFS namespace. |
| QopConfiguration | getQopConfiguration() The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. |
| Integer | getReplicationFactor() The number of DataNodes to replicate the data to when writing to the HDFS cluster. |
| String | getSimpleUser() The user name used to identify the client on the host operating system. |
| String | getSubdirectory() A subdirectory in the HDFS cluster. |
| List<TagListEntry> | getTags() The key-value pair that represents the tag that you want to add to the location. |
| int | hashCode() |
| void | setAgentArns(Collection<String> agentArns) The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster. |
| void | setAuthenticationType(String authenticationType) The type of authentication used to determine the identity of the user. |
| void | setBlockSize(Integer blockSize) The size of data blocks to write into the HDFS cluster. |
| void | setKerberosKeytab(ByteBuffer kerberosKeytab) The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. |
| void | setKerberosKrb5Conf(ByteBuffer kerberosKrb5Conf) The krb5.conf file that contains the Kerberos configuration information. |
| void | setKerberosPrincipal(String kerberosPrincipal) The Kerberos principal with access to the files and folders on the HDFS cluster. |
| void | setKmsKeyProviderUri(String kmsKeyProviderUri) The URI of the HDFS cluster's Key Management Server (KMS). |
| void | setNameNodes(Collection<HdfsNameNode> nameNodes) The NameNode that manages the HDFS namespace. |
| void | setQopConfiguration(QopConfiguration qopConfiguration) The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. |
| void | setReplicationFactor(Integer replicationFactor) The number of DataNodes to replicate the data to when writing to the HDFS cluster. |
| void | setSimpleUser(String simpleUser) The user name used to identify the client on the host operating system. |
| void | setSubdirectory(String subdirectory) A subdirectory in the HDFS cluster. |
| void | setTags(Collection<TagListEntry> tags) The key-value pair that represents the tag that you want to add to the location. |
| String | toString() Returns a string representation of this object. |
| CreateLocationHdfsRequest | withAgentArns(Collection<String> agentArns) The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster. |
| CreateLocationHdfsRequest | withAgentArns(String... agentArns) The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster. |
| CreateLocationHdfsRequest | withAuthenticationType(HdfsAuthenticationType authenticationType) The type of authentication used to determine the identity of the user. |
| CreateLocationHdfsRequest | withAuthenticationType(String authenticationType) The type of authentication used to determine the identity of the user. |
| CreateLocationHdfsRequest | withBlockSize(Integer blockSize) The size of data blocks to write into the HDFS cluster. |
| CreateLocationHdfsRequest | withKerberosKeytab(ByteBuffer kerberosKeytab) The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. |
| CreateLocationHdfsRequest | withKerberosKrb5Conf(ByteBuffer kerberosKrb5Conf) The krb5.conf file that contains the Kerberos configuration information. |
| CreateLocationHdfsRequest | withKerberosPrincipal(String kerberosPrincipal) The Kerberos principal with access to the files and folders on the HDFS cluster. |
| CreateLocationHdfsRequest | withKmsKeyProviderUri(String kmsKeyProviderUri) The URI of the HDFS cluster's Key Management Server (KMS). |
| CreateLocationHdfsRequest | withNameNodes(Collection<HdfsNameNode> nameNodes) The NameNode that manages the HDFS namespace. |
| CreateLocationHdfsRequest | withNameNodes(HdfsNameNode... nameNodes) The NameNode that manages the HDFS namespace. |
| CreateLocationHdfsRequest | withQopConfiguration(QopConfiguration qopConfiguration) The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. |
| CreateLocationHdfsRequest | withReplicationFactor(Integer replicationFactor) The number of DataNodes to replicate the data to when writing to the HDFS cluster. |
| CreateLocationHdfsRequest | withSimpleUser(String simpleUser) The user name used to identify the client on the host operating system. |
| CreateLocationHdfsRequest | withSubdirectory(String subdirectory) A subdirectory in the HDFS cluster. |
| CreateLocationHdfsRequest | withTags(Collection<TagListEntry> tags) The key-value pair that represents the tag that you want to add to the location. |
| CreateLocationHdfsRequest | withTags(TagListEntry... tags) The key-value pair that represents the tag that you want to add to the location. |
Methods inherited from class com.amazonaws.AmazonWebServiceRequest: addHandlerContext, getCloneRoot, getCloneSource, getCustomQueryParameters, getCustomRequestHeaders, getGeneralProgressListener, getHandlerContext, getReadLimit, getRequestClientOptions, getRequestCredentials, getRequestCredentialsProvider, getRequestMetricCollector, getSdkClientExecutionTimeout, getSdkRequestTimeout, putCustomQueryParameter, putCustomRequestHeader, setGeneralProgressListener, setRequestCredentials, setRequestCredentialsProvider, setRequestMetricCollector, setSdkClientExecutionTimeout, setSdkRequestTimeout, withGeneralProgressListener, withRequestCredentialsProvider, withRequestMetricCollector, withSdkClientExecutionTimeout, withSdkRequestTimeout
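For orientation, here is a minimal sketch of building this request with the fluent with methods listed above and sending it. The AWSDataSyncClientBuilder entry point, the createLocationHdfs call on the AWSDataSync client, the CreateLocationHdfsResult.getLocationArn accessor, and the HdfsNameNode and TagListEntry setters (withHostname, withPort, withKey, withValue) follow the usual AWS SDK for Java v1 conventions but are not documented on this page, so treat them as assumptions; the ARN, hostname, and user values are placeholders.

```java
import com.amazonaws.services.datasync.AWSDataSync;
import com.amazonaws.services.datasync.AWSDataSyncClientBuilder;
import com.amazonaws.services.datasync.model.CreateLocationHdfsRequest;
import com.amazonaws.services.datasync.model.CreateLocationHdfsResult;
import com.amazonaws.services.datasync.model.HdfsAuthenticationType;
import com.amazonaws.services.datasync.model.HdfsNameNode;
import com.amazonaws.services.datasync.model.TagListEntry;

public class CreateHdfsLocationSketch {
    public static void main(String[] args) {
        // Assumed: the standard v1 client builder for DataSync.
        AWSDataSync dataSync = AWSDataSyncClientBuilder.defaultClient();

        CreateLocationHdfsRequest request = new CreateLocationHdfsRequest()
                // Only one NameNode can be used; hostname/port setters are assumed on HdfsNameNode.
                .withNameNodes(new HdfsNameNode()
                        .withHostname("namenode.example.com")
                        .withPort(8020))
                .withAgentArns("arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0")
                .withAuthenticationType(HdfsAuthenticationType.SIMPLE)
                .withSimpleUser("hdfs-user")          // required when AuthenticationType is SIMPLE
                .withSubdirectory("/data/ingest")     // defaults to / when omitted
                .withBlockSize(128 * 1024 * 1024)     // must be a multiple of 512 bytes; 128 MiB is the default
                .withReplicationFactor(3)             // default is three DataNodes
                .withTags(new TagListEntry().withKey("Name").withValue("hdfs-location"));

        CreateLocationHdfsResult result = dataSync.createLocationHdfs(request);
        System.out.println(result.getLocationArn());
    }
}
```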
public void setSubdirectory(String subdirectory)
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.

subdirectory - A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.

public String getSubdirectory()
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.

public CreateLocationHdfsRequest withSubdirectory(String subdirectory)
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.

subdirectory - A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.

public List<HdfsNameNode> getNameNodes()
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
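As a sketch, supplying the single NameNode might look like the following; the withHostname and withPort builder methods on HdfsNameNode are assumed rather than documented on this page, and the hostname and port are placeholders.

```java
// The one NameNode DataSync should contact.
HdfsNameNode nameNode = new HdfsNameNode()
        .withHostname("namenode.example.com")  // hostname or IP address of the NameNode
        .withPort(8020);                       // RPC port the NameNode listens on

CreateLocationHdfsRequest request = new CreateLocationHdfsRequest()
        .withNameNodes(nameNode);              // only one NameNode can be used
```

Note that the varargs withNameNodes overload appends to any list already on the request, while setNameNodes and the Collection overload replace it, as described in the notes below.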
public void setNameNodes(Collection<HdfsNameNode> nameNodes)
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
nameNodes - The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.

public CreateLocationHdfsRequest withNameNodes(HdfsNameNode... nameNodes)
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
NOTE: This method appends the values to the existing list (if any). Use setNameNodes(java.util.Collection) or withNameNodes(java.util.Collection) if you want to override the existing values.
nameNodes - The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.

public CreateLocationHdfsRequest withNameNodes(Collection<HdfsNameNode> nameNodes)
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
nameNodes - The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.

public void setBlockSize(Integer blockSize)
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
blockSize - The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).

public Integer getBlockSize()
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
public CreateLocationHdfsRequest withBlockSize(Integer blockSize)
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
blockSize - The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).

public void setReplicationFactor(Integer replicationFactor)
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
replicationFactor - The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.

public Integer getReplicationFactor()
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
public CreateLocationHdfsRequest withReplicationFactor(Integer replicationFactor)
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
replicationFactor - The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.

public void setKmsKeyProviderUri(String kmsKeyProviderUri)
The URI of the HDFS cluster's Key Management Server (KMS).
kmsKeyProviderUri - The URI of the HDFS cluster's Key Management Server (KMS).

public String getKmsKeyProviderUri()
The URI of the HDFS cluster's Key Management Server (KMS).
public CreateLocationHdfsRequest withKmsKeyProviderUri(String kmsKeyProviderUri)
The URI of the HDFS cluster's Key Management Server (KMS).
kmsKeyProviderUri - The URI of the HDFS cluster's Key Management Server (KMS).

public void setQopConfiguration(QopConfiguration qopConfiguration)
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
qopConfiguration - The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.

public QopConfiguration getQopConfiguration()
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
public CreateLocationHdfsRequest withQopConfiguration(QopConfiguration qopConfiguration)
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
qopConfiguration - The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
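To make the defaulting rule concrete, a sketch that pins both QOP settings explicitly so they match what the cluster is configured with; the withRpcProtection and withDataTransferProtection setters on QopConfiguration (and their String overloads) are assumed rather than documented on this page.

```java
// Make both QOP settings explicit so they match the HDFS cluster's own configuration
// (typically the hadoop.rpc.protection and dfs.data.transfer.protection properties).
QopConfiguration qop = new QopConfiguration()
        .withRpcProtection("PRIVACY")
        .withDataTransferProtection("PRIVACY");

CreateLocationHdfsRequest request = new CreateLocationHdfsRequest()
        .withQopConfiguration(qop);

// If QopConfiguration is omitted entirely, both settings default to PRIVACY;
// if only one of them is set, the other assumes the same value.
```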
public void setAuthenticationType(String authenticationType)

The type of authentication used to determine the identity of the user.
authenticationType - The type of authentication used to determine the identity of the user.

See Also: HdfsAuthenticationType
public String getAuthenticationType()
The type of authentication used to determine the identity of the user.
See Also: HdfsAuthenticationType
public CreateLocationHdfsRequest withAuthenticationType(String authenticationType)
The type of authentication used to determine the identity of the user.
authenticationType - The type of authentication used to determine the identity of the user.

See Also: HdfsAuthenticationType
public CreateLocationHdfsRequest withAuthenticationType(HdfsAuthenticationType authenticationType)
The type of authentication used to determine the identity of the user.
authenticationType - The type of authentication used to determine the identity of the user.

See Also: HdfsAuthenticationType
public void setSimpleUser(String simpleUser)
The user name used to identify the client on the host operating system.

If SIMPLE is specified for AuthenticationType, this parameter is required.
simpleUser - The user name used to identify the client on the host operating system. If SIMPLE is specified for AuthenticationType, this parameter is required.
public String getSimpleUser()
The user name used to identify the client on the host operating system.

If SIMPLE is specified for AuthenticationType, this parameter is required.
public CreateLocationHdfsRequest withSimpleUser(String simpleUser)
The user name used to identify the client on the host operating system.

If SIMPLE is specified for AuthenticationType, this parameter is required.
simpleUser - The user name used to identify the client on the host operating system. If SIMPLE is specified for AuthenticationType, this parameter is required.
public void setKerberosPrincipal(String kerberosPrincipal)
The Kerberos principal with access to the files and folders on the HDFS cluster.

If KERBEROS is specified for AuthenticationType, this parameter is required.
kerberosPrincipal - The Kerberos principal with access to the files and folders on the HDFS cluster. If KERBEROS is specified for AuthenticationType, this parameter is required.
public String getKerberosPrincipal()
The Kerberos principal with access to the files and folders on the HDFS cluster.

If KERBEROS is specified for AuthenticationType, this parameter is required.
public CreateLocationHdfsRequest withKerberosPrincipal(String kerberosPrincipal)
The Kerberos principal with access to the files and folders on the HDFS cluster.

If KERBEROS is specified for AuthenticationType, this parameter is required.
kerberosPrincipal - The Kerberos principal with access to the files and folders on the HDFS cluster. If KERBEROS is specified for AuthenticationType, this parameter is required.
public void setKerberosKeytab(ByteBuffer kerberosKeytab)
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.
Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.
kerberosKeytab - The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text. If KERBEROS is specified for AuthenticationType, this parameter is required.
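A sketch of supplying the Kerberos material when calling the SDK directly rather than the CLI; the file paths and principal are placeholders, and the SDK performs the Base64 encoding of both ByteBuffer fields as noted above.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;

// Load the raw keytab and krb5.conf bytes from disk; the SDK Base64-encodes these
// fields before sending the request, so no manual encoding is needed.
static CreateLocationHdfsRequest kerberosHdfsRequest() throws IOException {
    ByteBuffer keytab = ByteBuffer.wrap(
            Files.readAllBytes(Paths.get("/etc/security/keytabs/datasync.keytab")));
    ByteBuffer krb5Conf = ByteBuffer.wrap(
            Files.readAllBytes(Paths.get("/etc/krb5.conf")));

    return new CreateLocationHdfsRequest()
            .withAuthenticationType(HdfsAuthenticationType.KERBEROS)
            .withKerberosPrincipal("datasync/host.example.com@EXAMPLE.COM") // required for KERBEROS
            .withKerberosKeytab(keytab)                                     // required for KERBEROS
            .withKerberosKrb5Conf(krb5Conf);                                // required for KERBEROS
}
```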
public ByteBuffer getKerberosKeytab()
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
ByteBuffers are stateful. Calling their get methods changes their position. We recommend using ByteBuffer.asReadOnlyBuffer() to create a read-only view of the buffer with an independent position, and calling get methods on this rather than directly on the returned ByteBuffer. Doing so will ensure that anyone else using the ByteBuffer will not be affected by changes to the position.
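Following the recommendation above, a short sketch that reads the returned buffer through a read-only view; here request stands for an already-populated CreateLocationHdfsRequest.

```java
ByteBuffer shared = request.getKerberosKeytab();
if (shared != null) {
    // asReadOnlyBuffer() gives an independent position over the same content,
    // so reading here does not disturb other holders of the buffer.
    ByteBuffer view = shared.asReadOnlyBuffer();
    byte[] keytabBytes = new byte[view.remaining()];
    view.get(keytabBytes);
}
```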
public CreateLocationHdfsRequest withKerberosKeytab(ByteBuffer kerberosKeytab)
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text.
If KERBEROS is specified for AuthenticationType, this parameter is required.
The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.
Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.
kerberosKeytab - The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. You can load the keytab from a file by providing the file's address. If you're using the CLI, it performs base64 encoding for you. Otherwise, provide the base64-encoded text. If KERBEROS is specified for AuthenticationType, this parameter is required.
public void setKerberosKrb5Conf(ByteBuffer kerberosKrb5Conf)
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.

If KERBEROS is specified for AuthenticationType, this parameter is required.
The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.
Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.
kerberosKrb5Conf - The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text. If KERBEROS is specified for AuthenticationType, this parameter is required.
public ByteBuffer getKerberosKrb5Conf()
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.

If KERBEROS is specified for AuthenticationType, this parameter is required.
ByteBuffers are stateful. Calling their get methods changes their position. We recommend using ByteBuffer.asReadOnlyBuffer() to create a read-only view of the buffer with an independent position, and calling get methods on this rather than directly on the returned ByteBuffer. Doing so will ensure that anyone else using the ByteBuffer will not be affected by changes to the position.
public CreateLocationHdfsRequest withKerberosKrb5Conf(ByteBuffer kerberosKrb5Conf)
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text.

If KERBEROS is specified for AuthenticationType, this parameter is required.
The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.
Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.
kerberosKrb5Conf - The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf file by providing the file's address. If you're using the CLI, it performs the base64 encoding for you. Otherwise, provide the base64-encoded text. If KERBEROS is specified for AuthenticationType, this parameter is required.
public List<String> getAgentArns()
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
public void setAgentArns(Collection<String> agentArns)
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
agentArns - The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.

public CreateLocationHdfsRequest withAgentArns(String... agentArns)
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
NOTE: This method appends the values to the existing list (if any). Use setAgentArns(java.util.Collection) or withAgentArns(java.util.Collection) if you want to override the existing values.
agentArns - The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
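To make the append-versus-override behavior concrete, a short sketch using methods documented on this page; the agent ARNs are placeholders.

```java
CreateLocationHdfsRequest request = new CreateLocationHdfsRequest()
        .withAgentArns("arn:aws:datasync:us-east-1:111122223333:agent/agent-aaaa1111");

// The varargs form appends: the request now lists both agents.
request.withAgentArns("arn:aws:datasync:us-east-1:111122223333:agent/agent-bbbb2222");

// The Collection forms replace: the request now lists only agent-cccc3333.
request.setAgentArns(java.util.Collections.singletonList(
        "arn:aws:datasync:us-east-1:111122223333:agent/agent-cccc3333"));
```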
public CreateLocationHdfsRequest withAgentArns(Collection<String> agentArns)

The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
agentArns - The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.

public List<TagListEntry> getTags()
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
public void setTags(Collection<TagListEntry> tags)
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
tags - The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.

public CreateLocationHdfsRequest withTags(TagListEntry... tags)
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
NOTE: This method appends the values to the existing list (if any). Use setTags(java.util.Collection) or withTags(java.util.Collection) if you want to override the existing values.
tags - The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.

public CreateLocationHdfsRequest withTags(Collection<TagListEntry> tags)
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
tags - The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.

public String toString()
Returns a string representation of this object.

Overrides: toString in class Object

See Also: Object.toString()
public CreateLocationHdfsRequest clone()
Description copied from class: AmazonWebServiceRequest

Creates a shallow clone of this object for all fields except the handler context.

Overrides: clone in class AmazonWebServiceRequest

See Also: Object.clone()