There are more AWS SDK examples available in the AWS Doc SDK Examples GitHub repo.

Use StartTopicsDetectionJob with an AWS SDK or CLI

The following code examples show how to use StartTopicsDetectionJob.

Action examples are code excerpts from larger programs and must be run in context.

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

using System;
using System.Threading.Tasks;
using Amazon.Comprehend;
using Amazon.Comprehend.Model;

/// <summary>
/// This example scans the documents in an Amazon Simple Storage Service
/// (Amazon S3) bucket and analyzes them for topics. The results are stored
/// in another bucket, and then the resulting job properties are displayed
/// on the screen. This example was created using the AWS SDK for .NET
/// version 3.7 and .NET Core version 5.0.
/// </summary>
public static class TopicModeling
{
    /// <summary>
    /// This method starts a topic detection job by calling the Amazon
    /// Comprehend StartTopicsDetectionJob action.
    /// </summary>
    public static async Task Main()
    {
        var comprehendClient = new AmazonComprehendClient();

        string inputS3Uri = "s3://input bucket/input path";
        InputFormat inputDocFormat = InputFormat.ONE_DOC_PER_FILE;
        string outputS3Uri = "s3://output bucket/output path";
        string dataAccessRoleArn = "arn:aws:iam::account ID:role/data access role";
        int numberOfTopics = 10;

        var startTopicsDetectionJobRequest = new StartTopicsDetectionJobRequest()
        {
            InputDataConfig = new InputDataConfig()
            {
                S3Uri = inputS3Uri,
                InputFormat = inputDocFormat,
            },
            OutputDataConfig = new OutputDataConfig()
            {
                S3Uri = outputS3Uri,
            },
            DataAccessRoleArn = dataAccessRoleArn,
            NumberOfTopics = numberOfTopics,
        };

        var startTopicsDetectionJobResponse = await comprehendClient.StartTopicsDetectionJobAsync(startTopicsDetectionJobRequest);

        var jobId = startTopicsDetectionJobResponse.JobId;
        Console.WriteLine("JobId: " + jobId);

        var describeTopicsDetectionJobRequest = new DescribeTopicsDetectionJobRequest()
        {
            JobId = jobId,
        };

        var describeTopicsDetectionJobResponse = await comprehendClient.DescribeTopicsDetectionJobAsync(describeTopicsDetectionJobRequest);
        PrintJobProperties(describeTopicsDetectionJobResponse.TopicsDetectionJobProperties);

        var listTopicsDetectionJobsResponse = await comprehendClient.ListTopicsDetectionJobsAsync(new ListTopicsDetectionJobsRequest());
        foreach (var props in listTopicsDetectionJobsResponse.TopicsDetectionJobPropertiesList)
        {
            PrintJobProperties(props);
        }
    }

    /// <summary>
    /// This helper method displays the properties of a topics
    /// detection job.
    /// </summary>
    /// <param name="props">The properties of the topics detection job to display.</param>
    private static void PrintJobProperties(TopicsDetectionJobProperties props)
    {
        Console.WriteLine($"JobId: {props.JobId}, JobName: {props.JobName}, JobStatus: {props.JobStatus}");
        Console.WriteLine($"NumberOfTopics: {props.NumberOfTopics}\nInputS3Uri: {props.InputDataConfig.S3Uri}");
        Console.WriteLine($"InputFormat: {props.InputDataConfig.InputFormat}, OutputS3Uri: {props.OutputDataConfig.S3Uri}");
    }
}
CLI
AWS CLI

To start a topics detection analysis job

The following start-topics-detection-job example starts an asynchronous topics detection job for all files located at the address specified by the --input-data-config parameter. When the job is complete, the folder output is placed at the location specified by the --output-data-config parameter. output contains the files topic-terms.csv and doc-topics.csv. The first output file, topic-terms.csv, is a list of topics in the collection. For each topic, the list includes, by default, the top terms for that topic according to their weight. The second file, doc-topics.csv, lists the documents associated with a topic and the proportion of each document that is concerned with the topic.

aws comprehend start-topics-detection-job \
    --job-name example_topics_detection_job \
    --input-data-config "S3Uri=s3://DOC-EXAMPLE-BUCKET/" \
    --output-data-config "S3Uri=s3://DOC-EXAMPLE-DESTINATION-BUCKET/testfolder/" \
    --data-access-role-arn arn:aws:iam::111122223333:role/service-role/AmazonComprehendServiceRole-example-role

Output:

{ "JobId": "123456abcdeb0e11022f22a11EXAMPLE", "JobArn": "arn:aws:comprehend:us-west-2:111122223333:key-phrases-detection-job/123456abcdeb0e11022f22a11EXAMPLE", "JobStatus": "SUBMITTED" }

For more information, see Topic Modeling in the Amazon Comprehend Developer Guide.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import logging

from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


class ComprehendTopicModeler:
    """Encapsulates a Comprehend topic modeler."""

    def __init__(self, comprehend_client):
        """
        :param comprehend_client: A Boto3 Comprehend client.
        """
        self.comprehend_client = comprehend_client

    def start_job(
        self,
        job_name,
        input_bucket,
        input_key,
        input_format,
        output_bucket,
        output_key,
        data_access_role_arn,
    ):
        """
        Starts a topic modeling job. Input is read from the specified Amazon S3
        input bucket and written to the specified output bucket. Output data is
        stored in a tar archive compressed in gzip format. The job runs
        asynchronously, so you can call `describe_topics_detection_job` to get
        job status until it returns a status of COMPLETED.

        :param job_name: The name of the job.
        :param input_bucket: An Amazon S3 bucket that contains job input.
        :param input_key: The prefix used to find input data in the input
                          bucket. If multiple objects have the same prefix, all
                          of them are used.
        :param input_format: The format of the input data, either one document
                             per file or one document per line.
        :param output_bucket: The Amazon S3 bucket where output data is written.
        :param output_key: The prefix prepended to the output data.
        :param data_access_role_arn: The Amazon Resource Name (ARN) of a role
                                     that grants Comprehend permission to read
                                     from the input bucket and write to the
                                     output bucket.
        :return: Information about the job, including the job ID.
        """
        try:
            response = self.comprehend_client.start_topics_detection_job(
                JobName=job_name,
                DataAccessRoleArn=data_access_role_arn,
                InputDataConfig={
                    "S3Uri": f"s3://{input_bucket}/{input_key}",
                    "InputFormat": input_format.value,
                },
                OutputDataConfig={"S3Uri": f"s3://{output_bucket}/{output_key}"},
            )
            logger.info("Started topic modeling job %s.", response["JobId"])
        except ClientError:
            logger.exception("Couldn't start topic modeling job.")
            raise
        else:
            return response
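
The following is a minimal usage sketch, not part of the full example. The bucket names, prefixes, job name, and role ARN are placeholders, and the small JobInputFormat enum stands in for the format enumeration used by the complete example; the only requirement is that its .value matches a value accepted by the InputFormat request field (ONE_DOC_PER_FILE or ONE_DOC_PER_LINE).

import time
from enum import Enum

import boto3


class JobInputFormat(Enum):
    # Stand-in for the input format enumeration used by the complete example.
    ONE_DOC_PER_FILE = "ONE_DOC_PER_FILE"
    ONE_DOC_PER_LINE = "ONE_DOC_PER_LINE"


comprehend_client = boto3.client("comprehend")
modeler = ComprehendTopicModeler(comprehend_client)

# Placeholder job name, buckets, prefixes, and role ARN.
job = modeler.start_job(
    job_name="example-topic-modeling-job",
    input_bucket="amzn-s3-demo-bucket",
    input_key="input/",
    input_format=JobInputFormat.ONE_DOC_PER_FILE,
    output_bucket="amzn-s3-demo-destination-bucket",
    output_key="output/",
    data_access_role_arn="arn:aws:iam::111122223333:role/ComprehendDataAccessRole",
)

# Poll until the job finishes. Topic modeling jobs can take a while to run.
while True:
    status = comprehend_client.describe_topics_detection_job(JobId=job["JobId"])[
        "TopicsDetectionJobProperties"
    ]["JobStatus"]
    print(f"Job status: {status}")
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(60)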