Amazon Rekognition
Developer Guide

Non-Storage API Operations

Amazon Rekognition provides the following non-storage API operations:

  • DetectLabels to detect labels. This includes objects (for example, a flower, tree, or table), events (for example, a wedding, graduation, or debate), and concepts (for example, a landscape, adventure, or musical).

  • DetectFaces to detect faces.

  • CompareFaces to compare faces in images.

  • DetectModerationLabels to detect explicit or suggestive adult content in images.

  • RecognizeCelebrities to recognize celebrities in images.

  • GetCelebrityInfo to get information about a celebrity.

These are referred to as non-storage API operations because when you make the operation call, Amazon Rekognition does not persist any information discovered about the input image. As with all other Amazon Rekognition API operations, non-storage API operations do not persist any input image bytes.

The following example scenarios show where you might integrate non-storage API operations in your application. These scenarios assume that you have a local repository of images.

Example 1: An application that finds images in your local repository that contain specific labels

First, you detect labels in each of the images in your repository by using the Amazon Rekognition DetectLabels operation, and you build a client-side index, as shown following:

Label      ImageID
tree       image-1
flower     image-1
mountain   image-1
tulip      image-2
flower     image-2
apple      image-3

Then, your application can search this index to find images in your local repository that contain a specific label. For example, display images that contain a tree.
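The following sketch shows one way to build and query such an index. The boto3 call is shown only in a comment because it requires AWS credentials; the response dictionaries below are illustrative values in the documented DetectLabels response shape, not real service output.

```python
from collections import defaultdict

def index_labels(image_id, response, index):
    # Record every label in a DetectLabels-style response under the image ID.
    # `response` follows the documented shape:
    #   {"Labels": [{"Name": ..., "Confidence": ...}, ...]}
    for label in response["Labels"]:
        index[label["Name"]].append(image_id)
    return index

# In a real application the response comes from the service, for example:
#   import boto3
#   rekognition = boto3.client("rekognition")
#   with open("image-1.jpg", "rb") as f:
#       response = rekognition.detect_labels(Image={"Bytes": f.read()})

# Illustrative responses (label names and confidence values are made up):
responses = {
    "image-1": {"Labels": [{"Name": "flower", "Confidence": 96.4},
                           {"Name": "tree", "Confidence": 98.1},
                           {"Name": "mountain", "Confidence": 91.7}]},
    "image-2": {"Labels": [{"Name": "tulip", "Confidence": 97.0},
                           {"Name": "flower", "Confidence": 95.2}]},
}

index = defaultdict(list)
for image_id, response in responses.items():
    index_labels(image_id, response, index)

print(index["flower"])  # → ['image-1', 'image-2']
```

Searching the index is then a dictionary lookup: `index["tree"]` returns the IDs of all images in which a tree was detected.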

Each label that Amazon Rekognition detects has an associated confidence value, which indicates the level of confidence that the input image contains that label. You can use this value to perform additional client-side filtering, depending on how much confidence in the detection your application requires. For example, if you require precise labels, you might keep only the labels with high confidence values (such as 95% or higher). If your application can tolerate lower confidence, you might also accept labels with confidence values closer to 50%.
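Client-side filtering by confidence might look like the following sketch. The threshold and sample values are illustrative; DetectLabels also accepts a MinConfidence request parameter if you prefer to filter on the service side instead.

```python
def filter_labels(response, min_confidence):
    # Keep only labels detected at or above the given confidence level.
    # `response` follows the documented DetectLabels response shape.
    return [label["Name"] for label in response["Labels"]
            if label["Confidence"] >= min_confidence]

# Illustrative DetectLabels-style response (values are made up):
response = {"Labels": [{"Name": "tree", "Confidence": 98.1},
                       {"Name": "flower", "Confidence": 96.4},
                       {"Name": "landscape", "Confidence": 62.3}]}

print(filter_labels(response, 95.0))  # → ['tree', 'flower']
print(filter_labels(response, 50.0))  # → ['tree', 'flower', 'landscape']
```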

Example 2: An application to display enhanced face images

First, you can detect faces in each of the images in your local repository using the Amazon Rekognition DetectFaces operation and build a client-side index. For each face, the operation returns metadata that includes a bounding box, facial landmarks (for example, the position of mouth and ear), and facial attributes (for example, gender). You can store this metadata in a client-side local index, as shown following:

ImageID    FaceID    FaceMetaData
image-1    face-1    <boundingbox>, etc.
image-1    face-2    <boundingbox>, etc.
image-1    face-3    <boundingbox>, etc.
...

In this index, the primary key is a combination of both the ImageID and FaceID.
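A sketch of building that index follows, assuming a DetectFaces-style response. Note that because DetectFaces is a non-storage operation, it does not assign face IDs; the FaceID below is generated client-side as a per-image counter.

```python
def index_faces(image_id, response, index):
    # Store each face's metadata keyed on (ImageID, FaceID).
    # `response` follows the documented DetectFaces response shape:
    #   {"FaceDetails": [{"BoundingBox": {...}, "Landmarks": [...], ...}, ...]}
    # The FaceID is a client-side counter; DetectFaces itself returns none.
    for i, face in enumerate(response["FaceDetails"], start=1):
        index[(image_id, "face-%d" % i)] = face
    return index

# Illustrative response with one detected face (values are made up):
response = {"FaceDetails": [
    {"BoundingBox": {"Left": 0.3, "Top": 0.2, "Width": 0.2, "Height": 0.3}}
]}

index = index_faces("image-1", response, {})
print(index[("image-1", "face-1")]["BoundingBox"]["Left"])  # → 0.3
```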

Then, you can use the information in the index to enhance the images when your application displays them from your local repository. For example, you might add a bounding box around the face or highlight facial features.
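To draw a box around a face, you first convert the returned BoundingBox, whose Left, Top, Width, and Height values are expressed as ratios of the overall image dimensions, into pixel coordinates. A minimal sketch:

```python
def box_to_pixels(bounding_box, image_width, image_height):
    # Convert a BoundingBox given as ratios of the image dimensions
    # into pixel coordinates (left, top, width, height).
    return (round(bounding_box["Left"] * image_width),
            round(bounding_box["Top"] * image_height),
            round(bounding_box["Width"] * image_width),
            round(bounding_box["Height"] * image_height))

# Example: a face occupying the middle of a 640 x 480 image
box = {"Left": 0.25, "Top": 0.25, "Width": 0.5, "Height": 0.5}
print(box_to_pixels(box, 640, 480))  # → (160, 120, 320, 240)
```

The resulting pixel rectangle can then be passed to whatever drawing library your application uses to render the box or to crop and highlight facial features.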