
ToxicContent

Toxic content analysis result for one string. For more information about toxicity detection, see Toxicity detection in the Amazon Comprehend Developer Guide.
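For orientation, the JSON representation of a ToxicContent object has the following shape (the placeholder values are illustrative, not part of the API reference):

{
   "Name": "string",
   "Score": number
}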

Contents

Name

The name of the toxic content type.

Type: String

Valid Values: GRAPHIC | HARASSMENT_OR_ABUSE | HATE_SPEECH | INSULT | PROFANITY | SEXUAL | VIOLENCE_OR_THREAT

Required: No

Score

Model confidence in the detected content type. The value ranges from zero to one, where one indicates the highest confidence.

Type: Float

Required: No
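As a concrete illustration, the following is a minimal sketch using the AWS SDK for Python (boto3). It calls the DetectToxicContent action and reads the Name and Score of each ToxicContent entry in the response. The sample text, the Region, and the 0.5 threshold are arbitrary choices for this example, not values prescribed by the API.

import boto3

# Minimal sketch, assuming default credentials and a Region where
# toxicity detection is available (both are assumptions here).
comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_toxic_content(
    TextSegments=[{"Text": "Sample text to analyze."}],
    LanguageCode="en",
)

# Each entry in ResultList corresponds to one input text segment and
# carries a list of ToxicContent objects under "Labels".
for result in response["ResultList"]:
    for label in result["Labels"]:
        # label["Name"] is one of the valid values listed above;
        # label["Score"] is the model confidence, from zero to one.
        if label["Score"] >= 0.5:  # arbitrary example threshold
            print(f'{label["Name"]}: {label["Score"]:.2f}')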

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the AWS SDK documentation for Amazon Comprehend.