
Note: You are viewing the documentation for an older major version of the AWS CLI (version 1).

AWS CLI version 2, the latest major version of the AWS CLI, is now stable and recommended for general use. For more information, see the AWS CLI version 2 installation instructions and migration guide.

[ aws . sagemaker ]

describe-model-bias-job-definition

Description

Returns a description of a model bias job definition.

See also: AWS API Documentation

See 'aws help' for descriptions of global parameters.

Synopsis

  describe-model-bias-job-definition
  --job-definition-name <value>
  [--cli-input-json <value>]
  [--generate-cli-skeleton <value>]
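For example, the following sketch describes a job definition by name. The name my-model-bias-job-def is hypothetical (substitute one from your account), and the live call is guarded so the snippet runs cleanly even where the AWS CLI or credentials are unavailable:

```shell
# Hypothetical job definition name; substitute one from your account.
JOB_DEF_NAME="my-model-bias-job-def"

# Guarded so this sketch is safe to run without the CLI or credentials.
if command -v aws >/dev/null 2>&1; then
    aws sagemaker describe-model-bias-job-definition \
        --job-definition-name "$JOB_DEF_NAME" \
        || echo "call failed; check credentials and region"
else
    echo "aws CLI not installed; skipping live call"
fi
```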

Options

--job-definition-name (string)

The name of the model bias job definition. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

--cli-input-json (string) Performs service operation based on the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally.

--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.

See 'aws help' for descriptions of global parameters.
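The two optional parameters work together: --generate-cli-skeleton emits an input template, and --cli-input-json consumes it. A minimal sketch follows; the job definition name is hypothetical, and the live aws calls are left as comments because they require valid credentials:

```shell
# The input skeleton for this command has a single required field.
# (Live equivalent: aws sagemaker describe-model-bias-job-definition \
#                       --generate-cli-skeleton > input.json)
cat > input.json <<'EOF'
{
    "JobDefinitionName": "my-model-bias-job-def"
}
EOF

# Validate the JSON before handing it to --cli-input-json.
python3 -m json.tool input.json

# The live call (requires credentials):
# aws sagemaker describe-model-bias-job-definition \
#     --cli-input-json file://input.json
```

Values given on the command line would override the corresponding fields in input.json, per the --cli-input-json precedence rule above.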

Output

JobDefinitionArn -> (string)

The Amazon Resource Name (ARN) of the model bias job.

JobDefinitionName -> (string)

The name of the bias job definition. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

CreationTime -> (timestamp)

The time at which the model bias job was created.

ModelBiasBaselineConfig -> (structure)

The baseline configuration for a model bias job.

BaseliningJobName -> (string)

The name of the baseline model bias job.

ConstraintsResource -> (structure)

The constraints resource for a monitoring job.

S3Uri -> (string)

The Amazon S3 URI for the constraints resource.

ModelBiasAppSpecification -> (structure)

Configures the model bias job to run a specified Docker container image.

ImageUri -> (string)

The container image to be run by the model bias job.

ConfigUri -> (string)

A JSON-formatted Amazon S3 file that defines bias parameters. For more information on this JSON configuration file, see Configure bias parameters.

Environment -> (map)

Sets the environment variables in the Docker container.

key -> (string)

value -> (string)
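As a concrete shape, the ModelBiasAppSpecification portion of a response might look like the fragment below. All values (account, region, image tag, bucket, and the environment variable) are illustrative, not taken from any real account:

```shell
# Illustrative fragment of the response; every value is hypothetical.
cat > app-spec.json <<'EOF'
{
    "ModelBiasAppSpecification": {
        "ImageUri": "123456789012.dkr.ecr.us-west-2.amazonaws.com/sagemaker-clarify-processing:1.0",
        "ConfigUri": "s3://my-bucket/bias/analysis_config.json",
        "Environment": {
            "ExampleVariable": "example-value"
        }
    }
}
EOF

# Confirm the fragment is well-formed JSON.
python3 -m json.tool app-spec.json
```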

ModelBiasJobInput -> (structure)

Inputs for the model bias job.

EndpointInput -> (structure)

Input object for the endpoint.

EndpointName -> (string)

An endpoint in the customer's account that has DataCaptureConfig enabled.

LocalPath -> (string)

Path to the filesystem where the endpoint data is available to the container.

S3InputMode -> (string)

Whether Pipe or File mode is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

S3DataDistributionType -> (string)

Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

FeaturesAttribute -> (string)

The attributes of the input data that are the input features.

InferenceAttribute -> (string)

The attribute of the input data that represents the ground truth label.

ProbabilityAttribute -> (string)

In a classification problem, the attribute that represents the class probability.

ProbabilityThresholdAttribute -> (double)

The threshold for the class probability to be evaluated as a positive result.

StartTimeOffset -> (string)

If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

EndTimeOffset -> (string)

If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

GroundTruthS3Input -> (structure)

Location of ground truth labels to use in model bias job.

S3Uri -> (string)

The address of the Amazon S3 location of the ground truth labels.

ModelBiasJobOutputConfig -> (structure)

The output configuration for monitoring jobs.

MonitoringOutputs -> (list)

Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

(structure)

The output object for a monitoring job.

S3Output -> (structure)

The Amazon S3 storage location where the results of a monitoring job are saved.

S3Uri -> (string)

A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

LocalPath -> (string)

The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

S3UploadMode -> (string)

Whether to upload the results of the monitoring job continuously or after the job completes.

KmsKeyId -> (string)

The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

JobResources -> (structure)

Identifies the resources to deploy for a monitoring job.

ClusterConfig -> (structure)

The configuration for the cluster resources used to run the processing job.

InstanceCount -> (integer)

The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

InstanceType -> (string)

The ML compute instance type for the processing job.

VolumeSizeInGB -> (integer)

The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

VolumeKmsKeyId -> (string)

The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

NetworkConfig -> (structure)

Networking options for a model bias job.

EnableInterContainerTrafficEncryption -> (boolean)

Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.

EnableNetworkIsolation -> (boolean)

Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.

VpcConfig -> (structure)

Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

SecurityGroupIds -> (list)

The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

(string)

Subnets -> (list)

The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

(string)

RoleArn -> (string)

The Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) role that has read permission to the input data location and write permission to the output data location in Amazon S3.

StoppingCondition -> (structure)

A time limit for how long the monitoring job is allowed to run before stopping.

MaxRuntimeInSeconds -> (integer)

The maximum runtime allowed in seconds.

Note

The MaxRuntimeInSeconds cannot exceed the schedule frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.
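The response shapes above can be picked apart locally once saved to a file. A runnable sketch follows; the trimmed response is illustrative (hypothetical account, region, and values), and the same paths work with the CLI's built-in --query (JMESPath) option on the live command:

```shell
# Trimmed, illustrative response; account, region, and values are hypothetical.
cat > response.json <<'EOF'
{
    "JobDefinitionArn": "arn:aws:sagemaker:us-west-2:111122223333:model-bias-job-definition/my-model-bias-job-def",
    "JobDefinitionName": "my-model-bias-job-def",
    "JobResources": {
        "ClusterConfig": {
            "InstanceCount": 1,
            "InstanceType": "ml.m5.xlarge",
            "VolumeSizeInGB": 20
        }
    },
    "StoppingCondition": {
        "MaxRuntimeInSeconds": 1800
    }
}
EOF

# Extract the fields of interest from the saved document.
python3 - <<'EOF'
import json

with open("response.json") as f:
    doc = json.load(f)

print(doc["JobDefinitionName"])
print(doc["JobResources"]["ClusterConfig"]["InstanceType"])
print(doc["StoppingCondition"]["MaxRuntimeInSeconds"])
EOF
```

Against the live service, the equivalent filter would be passed directly, for example: --query 'StoppingCondition.MaxRuntimeInSeconds'.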