interface LlmAsAJudgeOptions
| Language | Type name |
|---|---|
| .NET | Amazon.CDK.AWS.Bedrock.Agentcore.Alpha.LlmAsAJudgeOptions |
| Go | github.com/aws/aws-cdk-go/awsbedrockagentcorealpha/v2#LlmAsAJudgeOptions |
| Java | software.amazon.awscdk.services.bedrock.agentcore.alpha.LlmAsAJudgeOptions |
| Python | aws_cdk.aws_bedrock_agentcore_alpha.LlmAsAJudgeOptions |
| TypeScript (source) | @aws-cdk/aws-bedrock-agentcore-alpha » LlmAsAJudgeOptions |
Options for configuring an LLM-as-a-Judge custom evaluator.
Uses a foundation model to assess agent performance based on custom instructions and a rating scale.
Example

```ts
// Create a custom LLM-as-a-Judge evaluator
const evaluator = new agentcore.Evaluator(this, 'MyEvaluator', {
  evaluatorName: 'my_custom_evaluator',
  level: agentcore.EvaluationLevel.SESSION,
  evaluatorConfig: agentcore.EvaluatorConfig.llmAsAJudge({
    instructions: 'Evaluate whether the agent response is helpful and accurate.',
    modelId: 'us.anthropic.claude-sonnet-4-6',
    ratingScale: agentcore.EvaluatorRatingScale.categorical([
      { label: 'Good', definition: 'The response is helpful and accurate.' },
      { label: 'Bad', definition: 'The response is not helpful or contains errors.' },
    ]),
  }),
});

// Use the custom evaluator in an online evaluation configuration
new agentcore.OnlineEvaluationConfig(this, 'MyEvaluation', {
  onlineEvaluationConfigName: 'my_evaluation',
  evaluators: [
    agentcore.EvaluatorReference.builtin(agentcore.BuiltinEvaluator.HELPFULNESS),
    agentcore.EvaluatorReference.custom(evaluator),
  ],
  dataSource: agentcore.DataSourceConfig.fromCloudWatchLogs({
    logGroupNames: ['/aws/bedrock-agentcore/my-agent'],
    serviceNames: ['my-agent.default'],
  }),
});
```
Properties
| Name | Type | Description |
|---|---|---|
| instructions | string | The evaluation instructions that guide the language model in assessing agent performance. |
| modelId | string | The identifier of the Amazon Bedrock model to use for evaluation. |
| ratingScale | EvaluatorRatingScale | The rating scale that defines how the evaluator should score agent performance. |
| additionalModelRequestFields | { [string]: any } | Additional model-specific request fields. |
| inferenceConfig | Evaluator | Optional inference configuration parameters that control model behavior during evaluation. |
instructions
Type: string
The evaluation instructions that guide the language model in assessing agent performance.
These instructions define the evaluation criteria, context, and expected behavior.
Instructions must contain placeholders appropriate for the evaluation level
(e.g., {context}, {available_tools} for SESSION level).
Note: Evaluators using reference-input placeholders (e.g., {expected_tool_trajectory},
{assertions}, {expected_response}) are only compatible with on-demand evaluation,
not online evaluation.
See also: https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/custom-evaluators.html
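A SESSION-level instructions string with placeholders might look like the following sketch. It reuses only the `llmAsAJudge` API shown in the example above; the criteria wording and rating labels are illustrative, not taken from the AgentCore documentation.

```ts
// Sketch: SESSION-level instructions referencing the {context} and
// {available_tools} placeholders. The evaluation criteria here are
// illustrative assumptions, not prescribed by the service.
const sessionJudgeConfig = agentcore.EvaluatorConfig.llmAsAJudge({
  instructions:
    'Given the conversation in {context} and the tools listed in ' +
    '{available_tools}, rate whether the agent selected appropriate ' +
    'tools and answered the user accurately.',
  modelId: 'us.anthropic.claude-sonnet-4-6',
  ratingScale: agentcore.EvaluatorRatingScale.categorical([
    { label: 'Good', definition: 'Appropriate tool use and an accurate answer.' },
    { label: 'Bad', definition: 'Wrong tool choice or an inaccurate answer.' },
  ]),
});
```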
modelId
Type: string
The identifier of the Amazon Bedrock model to use for evaluation.
Accepts standard model IDs (e.g., 'anthropic.claude-sonnet-4-6')
and cross-region inference profile IDs with region prefixes
(e.g., 'us.anthropic.claude-sonnet-4-6', 'eu.anthropic.claude-sonnet-4-6').
ratingScale
Type: EvaluatorRatingScale
The rating scale that defines how the evaluator should score agent performance.
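A rating scale is not limited to two categories. The sketch below uses the same `EvaluatorRatingScale.categorical` factory shown in the example above to define a three-level scale; the labels and definitions are illustrative.

```ts
// Sketch: a three-category rating scale built with the categorical
// factory from the example above. Labels and definitions are
// illustrative assumptions.
const scale = agentcore.EvaluatorRatingScale.categorical([
  { label: 'Pass', definition: 'The response fully satisfies the criteria.' },
  { label: 'Partial', definition: 'The response satisfies some but not all criteria.' },
  { label: 'Fail', definition: 'The response does not satisfy the criteria.' },
]);
```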
additionalModelRequestFields?
Type: { [string]: any }
(optional, default: No additional fields)
Additional model-specific request fields.
inferenceConfig?
Type: Evaluator
(optional, default: The foundation model's default inference parameters are used)
Optional inference configuration parameters that control model behavior during evaluation.
When not specified, the foundation model uses its own default values for maxTokens, temperature, and topP.
See also: https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/custom-evaluators.html
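Overriding the model's defaults might look like the following sketch. The field names `maxTokens`, `temperature`, and `topP` come from the description above, but the exact shape of the `inferenceConfig` and `additionalModelRequestFields` values is an assumption; consult the linked documentation for the actual types.

```ts
// Sketch only: the object-literal shapes of inferenceConfig and
// additionalModelRequestFields are assumptions, not confirmed API.
const tunedJudgeConfig = agentcore.EvaluatorConfig.llmAsAJudge({
  instructions: 'Evaluate whether the agent response is helpful and accurate.',
  modelId: 'us.anthropic.claude-sonnet-4-6',
  ratingScale: agentcore.EvaluatorRatingScale.categorical([
    { label: 'Good', definition: 'The response is helpful and accurate.' },
    { label: 'Bad', definition: 'The response is not helpful or contains errors.' },
  ]),
  // Assumed shape: override the model defaults described above.
  inferenceConfig: {
    maxTokens: 1024,
    temperature: 0,
    topP: 0.9,
  },
  // Assumed shape: passed through as model-specific request fields.
  additionalModelRequestFields: {
    top_k: 50,
  },
});
```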
