Class: Aws::Rekognition::Types::ModerationLabel

Inherits: Struct
  • Object

Defined in:
gems/aws-sdk-rekognition/lib/aws-sdk-rekognition/types.rb

Overview

Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.
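As the class definition below shows, a ModerationLabel is a plain Ruby Struct with three members. A minimal local stand-in (not the SDK class itself) illustrates how the fields are read:

```ruby
# Local stand-in mirroring Aws::Rekognition::Types::ModerationLabel,
# which is a Struct with :confidence, :name, :parent_name members.
ModerationLabel = Struct.new(:confidence, :name, :parent_name)

# A second-level label: parent_name points at its top-level category.
# The values here are illustrative, not real API output.
label = ModerationLabel.new(97.5, "Graphic Violence", "Violence")

puts "#{label.name} (parent: #{label.parent_name}): #{label.confidence.round(1)}%"
```

In real code these objects arrive in the response of `Aws::Rekognition::Client#detect_moderation_labels` rather than being constructed by hand.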

Constant Summary

SENSITIVE = []

Instance Attribute Summary

Instance Attribute Details

#confidence ⇒ Float

Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.

If you don't specify the MinConfidence parameter in the call to DetectModerationLabels, the operation returns labels with a confidence value greater than or equal to 50 percent.

Returns:

  • (Float)


# File 'gems/aws-sdk-rekognition/lib/aws-sdk-rekognition/types.rb', line 4235

class ModerationLabel < Struct.new(
  :confidence,
  :name,
  :parent_name)
  SENSITIVE = []
  include Aws::Structure
end
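The default MinConfidence threshold of 50 percent can also be mimicked client-side when post-filtering labels. A sketch using stand-in structs (the label values are hypothetical, not real API output):

```ruby
ConfidenceLabel = Struct.new(:confidence, :name, :parent_name)

# Hypothetical labels; real ones come back from
# Aws::Rekognition::Client#detect_moderation_labels.
labels = [
  ConfidenceLabel.new(92.1, "Alcohol", ""),
  ConfidenceLabel.new(49.9, "Gambling", ""),
  ConfidenceLabel.new(75.0, "Drinking", "Alcohol"),
]

# Keep only labels at or above the default 50 percent threshold,
# matching the behavior when MinConfidence is not specified.
confident = labels.select { |l| l.confidence >= 50 }
confident.map(&:name)  # => ["Alcohol", "Drinking"]
```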

#name ⇒ String

The label name for the type of unsafe content detected in the image.

Returns:

  • (String)



#parent_name ⇒ String

The name for the parent label. Labels at the top level of the hierarchy have the parent label "".

Returns:

  • (String)


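Because top-level labels carry an empty parent_name, the flat label list can be regrouped into the two-level taxonomy. A sketch with stand-in structs and hypothetical label values:

```ruby
HierarchyLabel = Struct.new(:confidence, :name, :parent_name)

labels = [
  HierarchyLabel.new(90.0, "Violence", ""),  # top level: parent_name is ""
  HierarchyLabel.new(88.0, "Graphic Violence", "Violence"),
  HierarchyLabel.new(70.0, "Weapon Violence", "Violence"),
]

# Split top-level labels from second-level ones, then attach each
# child to the parent whose name matches its parent_name.
top_level, children = labels.partition { |l| l.parent_name.empty? }
tree = top_level.to_h do |t|
  [t.name, children.select { |c| c.parent_name == t.name }.map(&:name)]
end
# => {"Violence" => ["Graphic Violence", "Weapon Violence"]}
```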