Amazon Rekognition
Developer Guide


Detects explicit or suggestive adult content in a specified JPEG or PNG format image. Use DetectModerationLabels to moderate images depending on your requirements. For example, you might want to filter images that contain nudity, but not images containing suggestive content.

To filter images, use the labels returned by DetectModerationLabels to determine which types of content are appropriate. For information about moderation labels, see Moderating Images.
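As a sketch of this filtering step, the function below inspects a parsed DetectModerationLabels response and reports whether any detected label falls under a caller-supplied set of disallowed top-level categories. The response shape and category names follow the example response later in this topic; the `should_filter` helper itself is illustrative, not part of the API.

```python
# Sketch: decide whether an image should be filtered, given a parsed
# DetectModerationLabels response (a dict) and a set of disallowed
# top-level moderation categories.

def should_filter(response, disallowed):
    for label in response.get("ModerationLabels", []):
        # A label belongs to a top-level category either directly
        # (empty ParentName) or through its parent label.
        category = label["ParentName"] or label["Name"]
        if category in disallowed:
            return True
    return False

# Shape taken from the sample response in this topic.
sample = {
    "ModerationLabels": [
        {"Confidence": 79.0, "ParentName": "", "Name": "Explicit Nudity"},
        {"Confidence": 68.9, "ParentName": "Explicit Nudity", "Name": "Sexual Activity"},
    ]
}

print(should_filter(sample, {"Explicit Nudity"}))  # True
print(should_filter(sample, {"Violence"}))         # False
```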

Request Syntax

{
   "Image": {
      "Bytes": blob,
      "S3Object": {
         "Bucket": "string",
         "Name": "string",
         "Version": "string"
      }
   },
   "MinConfidence": number
}

Request Parameters

The request accepts the following data in JSON format.


Image

Provides the source image either as bytes or an S3 object.

The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

You may need to Base64-encode the image bytes depending on the language you are using and whether or not you are using the AWS SDK. For more information, see Example 4: Supplying Image Bytes to Amazon Rekognition Operations.

If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies.

Type: Image object

Required: Yes
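When a request is constructed by hand rather than through an AWS SDK, the image bytes for the Bytes property must be Base64-encoded, as noted above. A minimal Python sketch (SDKs such as Boto3 handle this encoding for you; the file path shown in the comment is a placeholder):

```python
import base64

# In practice the bytes would come from a file, e.g.:
# with open("photo.png", "rb") as f:
#     image_bytes = f.read()
image_bytes = b"\x89PNG\r\n"  # stand-in for real image bytes

# Base64-encode the raw bytes for use in the Bytes property.
encoded = base64.b64encode(image_bytes).decode("ascii")
print(encoded)  # iVBORw0K
```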


MinConfidence

Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with a confidence level lower than this specified value.

If you don't specify MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent.

Type: Float

Valid Range: Minimum value of 0. Maximum value of 100.

Required: No
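Putting the request parameters together, a request body that references an S3 object and sets a MinConfidence threshold might be built as follows. This is a sketch, not SDK code; the bucket and object names are placeholders, and the range check simply mirrors the documented 0–100 valid range:

```python
import json

def build_request(bucket, name, min_confidence=50.0):
    """Build a DetectModerationLabels request body as a dict."""
    # MinConfidence must fall in the documented 0-100 range.
    if not 0.0 <= min_confidence <= 100.0:
        raise ValueError("MinConfidence must be between 0 and 100")
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": name}},
        "MinConfidence": min_confidence,
    }

# Bucket and object names below are placeholders.
request = build_request("my-bucket", "photo.jpg", 60.0)
print(json.dumps(request))
```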

Response Syntax

{
   "ModerationLabels": [
      {
         "Confidence": number,
         "Name": "string",
         "ParentName": "string"
      }
   ]
}

Response Elements

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.


ModerationLabels

A list of labels for explicit or suggestive adult content found in the image. The list includes the top-level label and each child label detected in the image. This is useful for filtering specific categories of content.

Type: array of ModerationLabel objects



Errors

AccessDeniedException

You are not authorized to perform the action.

HTTP Status Code: 400


ImageTooLargeException

The input image size exceeds the allowed limit. For more information, see Limits in Amazon Rekognition.

HTTP Status Code: 400


InternalServerError

Amazon Rekognition experienced a service issue. Try your call again.

HTTP Status Code: 500


InvalidImageFormatException

The provided image format is not supported.

HTTP Status Code: 400


InvalidParameterException

Input parameter violated a constraint. Validate your parameter before calling the API operation again.

HTTP Status Code: 400


InvalidS3ObjectException

Amazon Rekognition is unable to access the S3 object specified in the request.

HTTP Status Code: 400


ProvisionedThroughputExceededException

The number of requests exceeded your throughput limit. If you want to increase this limit, contact Amazon Rekognition.

HTTP Status Code: 400


ThrottlingException

Amazon Rekognition is temporarily unable to process the request. Try your call again.

HTTP Status Code: 500


Example Response

The following example shows the response of a call to DetectModerationLabels.

Sample Response

{
   "ModerationLabels": [
      {
         "Confidence": 79.03318786621094,
         "ParentName": "",
         "Name": "Explicit Nudity"
      },
      {
         "Confidence": 79.03318786621094,
         "ParentName": "Explicit Nudity",
         "Name": "Graphic Male Nudity"
      },
      {
         "Confidence": 68.99967956542969,
         "ParentName": "Explicit Nudity",
         "Name": "Sexual Activity"
      }
   ]
}
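To make sense of a response like this, you might group the child labels under their top-level category using the ParentName field. A sketch assuming the response shape shown above (the `group_by_parent` helper is illustrative, not part of the API):

```python
from collections import defaultdict

def group_by_parent(response):
    """Map each top-level category to the child labels detected under it."""
    groups = defaultdict(list)
    for label in response.get("ModerationLabels", []):
        if label["ParentName"]:  # child labels have a non-empty ParentName
            groups[label["ParentName"]].append(label["Name"])
    return dict(groups)

# Values taken from the sample response above.
sample = {
    "ModerationLabels": [
        {"Confidence": 79.03, "ParentName": "", "Name": "Explicit Nudity"},
        {"Confidence": 79.03, "ParentName": "Explicit Nudity", "Name": "Graphic Male Nudity"},
        {"Confidence": 68.99, "ParentName": "Explicit Nudity", "Name": "Sexual Activity"},
    ]
}

print(group_by_parent(sample))
# {'Explicit Nudity': ['Graphic Male Nudity', 'Sexual Activity']}
```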

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following: