Create a bulk import job (AWS CLI)

You can use the CreateBulkImportJob API operation to transfer data from Amazon S3 to AWS IoT SiteWise. The following example uses the AWS CLI.

Important

You must enable AWS IoT SiteWise to export data to Amazon S3 before you can create a bulk import job. For more information about how to configure storage settings, see Configuring storage settings.

Run the following command. Replace file-name with the name of the file that contains the bulk import job configuration.

aws iotsitewise create-bulk-import-job --cli-input-json file://file-name.json
Example bulk import job configuration
  • Replace error-bucket with the name of the Amazon S3 bucket to which errors associated with this bulk import job are sent.

  • Replace error-bucket-prefix with the prefix of the Amazon S3 bucket to which errors associated with this bulk import job are sent.

    Amazon S3 uses the prefix as a folder name to organize data in the bucket. Each Amazon S3 object has exactly one key, which serves as its unique identifier in the bucket. The prefix must end with a forward slash (/). For more information, see Organizing objects using prefixes in the Amazon Simple Storage Service User Guide.

  • Replace data-bucket with the name of the Amazon S3 bucket from which data is imported.

  • Replace data-bucket-key with the key of the Amazon S3 object that contains your data. Each object has exactly one key, which uniquely identifies it within the bucket.

  • Replace data-bucket-version-id with the version ID to identify a specific version of the Amazon S3 object that contains your data. This parameter is optional.

  • Replace column-name with the column name specified in the .csv file. A sample columnNames setting and matching data row appear after the example configuration below.

  • Replace job-name with a unique name that identifies the bulk import job.

  • Replace job-role-arn with the ARN of the IAM role that allows AWS IoT SiteWise to read Amazon S3 data.

    Note

    Make sure that your role has the permissions shown in the following example. Replace data-bucket with the name of the Amazon S3 bucket that contains your data and error-bucket with the name of the Amazon S3 bucket to which errors associated with this bulk import job are sent.

    { "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:GetObject", "s3:GetBucketLocation" ], "Resource": [ "arn:aws:s3:::data-bucket", "arn:aws:s3:::data-bucket/*", ], "Effect": "Allow" }, { "Action": [ "s3:PutObject", "s3:GetObject", "s3:GetBucketLocation" ], "Resource": [ "arn:aws:s3:::error-bucket", "arn:aws:s3:::error-bucket/*" ], "Effect": "Allow" } ] }
{ "errorReportLocation": { "bucket": "error-bucket", "prefix": "error-bucket-prefix" }, "files": [ { "bucket": "data-bucket", "key": "data-bucket-key", "versionId": "data-bucket-version-id" } ], "jobConfiguration": { "fileFormat": { "csv": { "columnNames": [ "column-name" ] } } }, "jobName": "job-name", "jobRoleArn": "job-role-arn" }
Example response
{ "jobId":"f8c031d0-01d1-4b94-90b1-afe8bb93b7e5", "jobStatus":"PENDING", "jobName":"myBulkImportJob" }