Inference parameters for foundation models

This section documents the inference parameters that you can use with the base models that Amazon Bedrock provides.

Optionally, set inference parameters to influence the response that the model generates. You set inference parameters in a playground in the console, or in the body field of the InvokeModel or InvokeModelWithResponseStream API operations.
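As a sketch of how inference parameters travel in the request body, the following Python snippet builds a JSON body and shows where it would be passed to InvokeModel via boto3. The field names shown (inputText, textGenerationConfig, temperature, topP, maxTokenCount) follow the Amazon Titan text format as an illustrative assumption; the exact fields differ per model, as documented in the sections that follow.

```python
import json

def build_titan_body(prompt, temperature=0.5, top_p=0.9, max_tokens=512):
    # Illustrative request body using Amazon Titan-style field names
    # (assumed here for demonstration); other models expect different fields.
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "temperature": temperature,
            "topP": top_p,
            "maxTokenCount": max_tokens,
        },
    })

body = build_titan_body("Explain inference parameters in one sentence.")

# The serialized body is what you would pass to InvokeModel, e.g.:
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="amazon.titan-text-express-v1",  # hypothetical model ID
#     body=body,
# )
```

The inference parameters sit inside the body alongside the prompt itself, which is why the same API call carries both.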

When you call a model, you also include a prompt for the model. For information about writing prompts, see Prompt engineering guidelines.

The following sections define the inference parameters available for each base model. For a custom model, use the same inference parameters as the base model from which it was customized.