@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class DataSource extends Object implements Serializable, Cloneable, StructuredPojo
Describes the data source that contains the data to upload to a dataset, or the list of records to delete from Amazon Personalize.
| Constructor and Description |
| --- |
| DataSource() |
| Modifier and Type | Method and Description |
| --- | --- |
| DataSource | clone() |
| boolean | equals(Object obj) |
| String | getDataLocation() For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. |
| int | hashCode() |
| void | marshall(ProtocolMarshaller protocolMarshaller) Marshalls this structured data using the given ProtocolMarshaller. |
| void | setDataLocation(String dataLocation) For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. |
| String | toString() Returns a string representation of this object. |
| DataSource | withDataLocation(String dataLocation) For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. |
public void setDataLocation(String dataLocation)

For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete.

For example: s3://bucket-name/folder-name/fileName.csv

If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolder. Use the following syntax with a / after the folder name: s3://bucket-name/folder-name/

Parameters:
dataLocation - For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete. For example: s3://bucket-name/folder-name/fileName.csv. If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolder. Use the following syntax with a / after the folder name: s3://bucket-name/folder-name/
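The two dataLocation forms described above (a single CSV file versus a folder prefix ending in /) can be sketched with a small hypothetical helper. This helper is not part of the AWS SDK; it only illustrates the trailing-slash convention the documentation describes.

```java
// Hypothetical helper (not part of the AWS SDK) illustrating the two
// dataLocation formats: a single file, or a folder prefix ending in '/'.
public class DataLocationFormats {

    // A folder location ends with '/'; Amazon Personalize then considers
    // every file in the folder and any subfolder.
    public static boolean isFolderLocation(String dataLocation) {
        return dataLocation.endsWith("/");
    }

    public static void main(String[] args) {
        String singleFile = "s3://bucket-name/folder-name/fileName.csv";
        String folder = "s3://bucket-name/folder-name/";
        System.out.println(isFolderLocation(singleFile)); // false: one CSV file
        System.out.println(isFolderLocation(folder));     // true: whole folder
    }
}
```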
public String getDataLocation()

For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete.

For example: s3://bucket-name/folder-name/fileName.csv

If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolder. Use the following syntax with a / after the folder name: s3://bucket-name/folder-name/

Returns:
For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete. For example: s3://bucket-name/folder-name/fileName.csv. If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolder. Use the following syntax with a / after the folder name: s3://bucket-name/folder-name/
public DataSource withDataLocation(String dataLocation)

For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete.

For example: s3://bucket-name/folder-name/fileName.csv

If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolder. Use the following syntax with a / after the folder name: s3://bucket-name/folder-name/

Parameters:
dataLocation - For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete. For example: s3://bucket-name/folder-name/fileName.csv. If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolder. Use the following syntax with a / after the folder name: s3://bucket-name/folder-name/
Returns:
Returns a reference to this object so that method calls can be chained together.
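Because withDataLocation returns the object itself, calls can be chained in the usual SDK fluent style. The following is a minimal, self-contained stand-in that mirrors this fluent-setter pattern; it is a sketch, not the SDK class (the real class is com.amazonaws.services.personalize.model.DataSource).

```java
// Minimal stand-in mirroring the SDK's setter/getter/fluent-setter pattern.
// Not the real class; for illustration only.
public class DataSourceSketch {
    private String dataLocation;

    public void setDataLocation(String dataLocation) {
        this.dataLocation = dataLocation;
    }

    public String getDataLocation() {
        return dataLocation;
    }

    // Returns this so calls can be chained when populating a request object.
    public DataSourceSketch withDataLocation(String dataLocation) {
        setDataLocation(dataLocation);
        return this;
    }

    public static void main(String[] args) {
        DataSourceSketch ds = new DataSourceSketch()
                .withDataLocation("s3://bucket-name/folder-name/");
        System.out.println(ds.getDataLocation());
    }
}
```

The with* variant exists precisely so a DataSource can be built inline inside a larger request without a temporary variable.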
public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object
See Also:
Object.toString()
public DataSource clone()
public void marshall(ProtocolMarshaller protocolMarshaller)

Marshalls this structured data using the given ProtocolMarshaller.

Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.