AWS SDK Version 3 for .NET
API Reference


Container for the parameters to the DetectLabels operation. Detects instances of real-world entities within an image (JPEG or PNG) provided as input. This includes objects like flower, tree, and table; events like wedding, graduation, and birthday party; and concepts like landscape, evening, and nature. For an example, see Analyzing images stored in an Amazon S3 bucket in the Amazon Rekognition Developer Guide.

DetectLabels does not support the detection of activities. However, activity detection is supported for label detection in videos. For more information, see StartLabelDetection in the Amazon Rekognition Developer Guide.

You pass the input image as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.

For each object, scene, and concept, the API returns one or more labels. Each label provides the object name and the level of confidence that the image contains the object. For example, suppose the input image has a lighthouse, the sea, and a rock. The response includes all three labels, one for each object.

{Name: lighthouse, Confidence: 98.4629}

{Name: rock, Confidence: 79.2097}

{Name: sea, Confidence: 75.061}

In the preceding example, the operation returns one label for each of the three objects. The operation can also return multiple labels for the same object in the image. For example, if the input image shows a flower (such as a tulip), the operation might return the following three labels.

{Name: flower, Confidence: 99.0562}

{Name: plant, Confidence: 99.0562}

{Name: tulip, Confidence: 99.0562}

In this example, the detection algorithm more precisely identifies the flower as a tulip.

In response, the API returns an array of labels. In addition, the response includes orientation correction. Optionally, you can specify MinConfidence to control the confidence threshold for the labels returned; the default is 50 percent. You can also add the MaxLabels parameter to limit the number of labels returned.
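As a brief sketch of how these options and the returned labels fit together (the bucket name, object key, and threshold values below are placeholders, and a client created from the default credential and region configuration is assumed), the request can be built and the returned labels printed like this:

using System;
using System.Threading.Tasks;
using Amazon.Rekognition;
using Amazon.Rekognition.Model;

class DetectLabelsSketch
{
    static async Task Main()
    {
        using (var client = new AmazonRekognitionClient())
        {
            var request = new DetectLabelsRequest
            {
                Image = new Image
                {
                    // Placeholder bucket and object key.
                    S3Object = new S3Object { Bucket = "mybucket", Name = "myphoto.jpg" }
                },
                MaxLabels = 10,       // return at most the 10 highest-confidence labels
                MinConfidence = 75F   // omit labels below 75 percent confidence
            };

            DetectLabelsResponse response = await client.DetectLabelsAsync(request);

            // Each label carries a name and the confidence that the image contains it,
            // for example {Name: lighthouse, Confidence: 98.4629}.
            foreach (Label label in response.Labels)
            {
                Console.WriteLine("{0}: {1:F2}", label.Name, label.Confidence);
            }
        }
    }
}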

If the object detected is a person, the operation doesn't provide the same facial details that the DetectFaces operation provides.

This is a stateless API operation. That is, the operation does not persist any data.

This operation requires permissions to perform the rekognition:DetectLabels action.

Inheritance Hierarchy

System.Object
  Amazon.Runtime.AmazonWebServiceRequest
    Amazon.Rekognition.AmazonRekognitionRequest
      Amazon.Rekognition.Model.DetectLabelsRequest

Namespace: Amazon.Rekognition.Model
Assembly: AWSSDK.Rekognition.dll
Version: 3.x.y.z


public class DetectLabelsRequest : AmazonRekognitionRequest

The DetectLabelsRequest type exposes the following members.


Constructors

Public Method DetectLabelsRequest()

Properties

Public Property Image Amazon.Rekognition.Model.Image

Gets and sets the property Image.

The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
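As a short sketch of the two ways to populate this property (the bucket, object key, and local file path are placeholders), you can either reference an image stored in Amazon S3 or supply the image bytes yourself; when you use the SDK you provide the raw bytes in a MemoryStream and the SDK takes care of encoding the request:

using System.IO;
using Amazon.Rekognition.Model;

// Option 1: reference an image stored in an Amazon S3 bucket.
var imageFromS3 = new Image
{
    S3Object = new S3Object { Bucket = "mybucket", Name = "myphoto.jpg" }
};

// Option 2: pass the image bytes directly (placeholder local file).
var imageFromBytes = new Image
{
    Bytes = new MemoryStream(File.ReadAllBytes("myphoto.jpg"))
};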

Public Property MaxLabels System.Int32

Gets and sets the property MaxLabels.

Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.

Public Property MinConfidence System.Single

Gets and sets the property MinConfidence.

Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with confidence lower than this specified value.

If MinConfidence is not specified, the operation returns labels with a confidence value greater than or equal to 50 percent.


Examples

To detect labels

This operation detects labels in the supplied image.

var response = client.DetectLabels(new DetectLabelsRequest
{
    Image = new Image { S3Object = new S3Object {
        Bucket = "mybucket",
        Name = "myphoto"
    } },
    MaxLabels = 123,
    MinConfidence = 70
});
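
The example above uses the synchronous DetectLabels method, which is available on the .NET Framework targets. On .NET Standard and .NET Core targets the service client exposes only asynchronous operations, so the equivalent call (assuming the same request shown above) would be a sketch along these lines:

var response = await client.DetectLabelsAsync(new DetectLabelsRequest
{
    Image = new Image { S3Object = new S3Object {
        Bucket = "mybucket",
        Name = "myphoto"
    } },
    MaxLabels = 123,
    MinConfidence = 70
});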


Version Information

.NET Standard:
Supported in: 1.3

.NET Framework:
Supported in: 4.5, 4.0, 3.5

Portable Class Library:
Supported in: Windows Store Apps
Supported in: Windows Phone 8.1
Supported in: Xamarin Android
Supported in: Xamarin iOS (Unified)
Supported in: Xamarin.Forms