Invoke Anthropic Claude 2.x on Amazon Bedrock for text generation - AWS SDK code examples

There are more SDK examples available in the AWS Doc SDK Examples GitHub repo.

Translations are provided by machine translation. In the event of any discrepancy, inconsistency, or conflict between a translation and the English version, the English version prevails.

Invoke Anthropic Claude 2.x on Amazon Bedrock for text generation

The following code examples show how to invoke Anthropic Claude 2.x on Amazon Bedrock to generate text.
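Every Text Completions example below builds the same model-specific request body: the prompt wrapped in Claude's required `Human:`/`Assistant:` template, plus inference parameters such as `max_tokens_to_sample` and `stop_sequences`. As a language-neutral sketch of that shared payload (request construction only; this makes no call to Amazon Bedrock, and the parameter values simply mirror the examples below):

```python
import json


def build_claude_text_completions_body(prompt: str) -> str:
    """Build the JSON request body used by Claude 2.x's Text Completions format.

    Claude requires the prompt to be enclosed in the Human/Assistant template.
    The stop sequence keeps the model from generating a new "Human:" turn.
    """
    enclosed_prompt = "Human: " + prompt + "\n\nAssistant:"
    body = {
        "prompt": enclosed_prompt,
        "max_tokens_to_sample": 200,  # maximum length of the completion
        "temperature": 0.5,  # randomness of the output
        "stop_sequences": ["\n\nHuman:"],
    }
    return json.dumps(body)


print(build_claude_text_completions_body("Hello, Claude"))
```

Each SDK below serializes this same structure, sends it with its `InvokeModel` operation, and reads the generated text from the `completion` field of the JSON response.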

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Asynchronously invoke the Anthropic Claude 2 foundation model to generate text.

/// <summary>
/// Asynchronously invokes the Anthropic Claude 2 model to run an inference based on the provided input.
/// </summary>
/// <param name="prompt">The prompt that you want Claude to complete.</param>
/// <returns>The inference response from the model</returns>
/// <remarks>
/// The different model providers have individual request and response formats.
/// For the format, ranges, and default values for Anthropic Claude, refer to:
///     https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html
/// </remarks>
public static async Task<string> InvokeClaudeAsync(string prompt)
{
    string claudeModelId = "anthropic.claude-v2";

    // Claude requires you to enclose the prompt as follows:
    string enclosedPrompt = "Human: " + prompt + "\n\nAssistant:";

    AmazonBedrockRuntimeClient client = new(RegionEndpoint.USEast1);

    string payload = new JsonObject()
    {
        { "prompt", enclosedPrompt },
        { "max_tokens_to_sample", 200 },
        { "temperature", 0.5 },
        { "stop_sequences", new JsonArray("\n\nHuman:") }
    }.ToJsonString();

    string generatedText = "";
    try
    {
        InvokeModelResponse response = await client.InvokeModelAsync(new InvokeModelRequest()
        {
            ModelId = claudeModelId,
            Body = AWSSDKUtils.GenerateMemoryStreamFromString(payload),
            ContentType = "application/json",
            Accept = "application/json"
        });

        if (response.HttpStatusCode == System.Net.HttpStatusCode.OK)
        {
            return JsonNode.ParseAsync(response.Body).Result?["completion"]?.GetValue<string>() ?? "";
        }
        else
        {
            Console.WriteLine("InvokeModelAsync failed with status code " + response.HttpStatusCode);
        }
    }
    catch (AmazonBedrockRuntimeException e)
    {
        Console.WriteLine(e.Message);
    }
    return generatedText;
}
  • For API details, see InvokeModel in the AWS SDK for .NET API Reference.

Go
SDK for Go V2
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Invoke the Anthropic Claude 2 foundation model to generate text.

// Each model provider has their own individual request and response formats.
// For the format, ranges, and default values for Anthropic Claude, refer to:
//     https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html

type ClaudeRequest struct {
	Prompt            string   `json:"prompt"`
	MaxTokensToSample int      `json:"max_tokens_to_sample"`
	Temperature       float64  `json:"temperature,omitempty"`
	StopSequences     []string `json:"stop_sequences,omitempty"`
}

type ClaudeResponse struct {
	Completion string `json:"completion"`
}

// Invokes Anthropic Claude on Amazon Bedrock to run an inference using the input
// provided in the request body.
func (wrapper InvokeModelWrapper) InvokeClaude(prompt string) (string, error) {
	modelId := "anthropic.claude-v2"

	// Anthropic Claude requires enclosing the prompt as follows:
	enclosedPrompt := "Human: " + prompt + "\n\nAssistant:"

	body, err := json.Marshal(ClaudeRequest{
		Prompt:            enclosedPrompt,
		MaxTokensToSample: 200,
		Temperature:       0.5,
		StopSequences:     []string{"\n\nHuman:"},
	})
	if err != nil {
		log.Fatal("failed to marshal", err)
	}

	output, err := wrapper.BedrockRuntimeClient.InvokeModel(context.TODO(), &bedrockruntime.InvokeModelInput{
		ModelId:     aws.String(modelId),
		ContentType: aws.String("application/json"),
		Body:        body,
	})
	if err != nil {
		ProcessError(err, modelId)
	}

	var response ClaudeResponse
	if err := json.Unmarshal(output.Body, &response); err != nil {
		log.Fatal("failed to unmarshal", err)
	}

	return response.Completion, nil
}
  • For API details, see InvokeModel in the AWS SDK for Go API Reference.

Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Invoke Claude 2.x using the synchronous client (scroll down for an asynchronous example).

/**
 * Invokes the Anthropic Claude 2 model to run an inference based on the
 * provided input.
 *
 * @param prompt The prompt for Claude to complete.
 * @return The generated response.
 */
public static String invokeClaude(String prompt) {
    /*
     * The different model providers have individual request and response formats.
     * For the format, ranges, and default values for Anthropic Claude, refer to:
     * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html
     */
    String claudeModelId = "anthropic.claude-v2";

    // Claude requires you to enclose the prompt as follows:
    String enclosedPrompt = "Human: " + prompt + "\n\nAssistant:";

    BedrockRuntimeClient client = BedrockRuntimeClient.builder()
            .region(Region.US_EAST_1)
            .credentialsProvider(ProfileCredentialsProvider.create())
            .build();

    String payload = new JSONObject()
            .put("prompt", enclosedPrompt)
            .put("max_tokens_to_sample", 200)
            .put("temperature", 0.5)
            .put("stop_sequences", List.of("\n\nHuman:"))
            .toString();

    InvokeModelRequest request = InvokeModelRequest.builder()
            .body(SdkBytes.fromUtf8String(payload))
            .modelId(claudeModelId)
            .contentType("application/json")
            .accept("application/json")
            .build();

    InvokeModelResponse response = client.invokeModel(request);

    JSONObject responseBody = new JSONObject(response.body().asUtf8String());

    String generatedText = responseBody.getString("completion");

    return generatedText;
}

Invoke Claude 2.x using the asynchronous client.

/**
 * Asynchronously invokes the Anthropic Claude 2 model to run an inference based
 * on the provided input.
 *
 * @param prompt The prompt that you want Claude to complete.
 * @return The inference response from the model.
 */
public static String invokeClaude(String prompt) {
    /*
     * The different model providers have individual request and response formats.
     * For the format, ranges, and default values for Anthropic Claude, refer to:
     * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html
     */
    String claudeModelId = "anthropic.claude-v2";

    // Claude requires you to enclose the prompt as follows:
    String enclosedPrompt = "Human: " + prompt + "\n\nAssistant:";

    BedrockRuntimeAsyncClient client = BedrockRuntimeAsyncClient.builder()
            .region(Region.US_EAST_1)
            .credentialsProvider(ProfileCredentialsProvider.create())
            .build();

    String payload = new JSONObject()
            .put("prompt", enclosedPrompt)
            .put("max_tokens_to_sample", 200)
            .put("temperature", 0.5)
            .put("stop_sequences", List.of("\n\nHuman:"))
            .toString();

    InvokeModelRequest request = InvokeModelRequest.builder()
            .body(SdkBytes.fromUtf8String(payload))
            .modelId(claudeModelId)
            .contentType("application/json")
            .accept("application/json")
            .build();

    CompletableFuture<InvokeModelResponse> completableFuture = client.invokeModel(request)
            .whenComplete((response, exception) -> {
                if (exception != null) {
                    System.out.println("Model invocation failed: " + exception);
                }
            });

    String generatedText = "";
    try {
        InvokeModelResponse response = completableFuture.get();
        JSONObject responseBody = new JSONObject(response.body().asUtf8String());
        generatedText = responseBody.getString("completion");
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        System.err.println(e.getMessage());
    } catch (ExecutionException e) {
        System.err.println(e.getMessage());
    }

    return generatedText;
}
  • For API details, see InvokeModel in the AWS SDK for Java 2.x API Reference.

JavaScript
SDK for JavaScript (v3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Invoke the Anthropic Claude 2 foundation model to generate text.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { fileURLToPath } from "url";
import { FoundationModels } from "../../config/foundation_models.js";
import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} ResponseContent
 * @property {string} text
 */

/**
 * @typedef {Object} MessagesResponseBody
 * @property {ResponseContent[]} content
 */

/**
 * @typedef {Object} TextCompletionsResponseBody
 * @property {string} completion
 */

/**
 * Invokes Anthropic Claude 2.x using the Messages API.
 *
 * To learn more about the Anthropic Messages API, go to:
 * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html
 *
 * @param {string} prompt - The input text prompt for the model to complete.
 * @param {string} [modelId] - The ID of the model to use. Defaults to "anthropic.claude-v2".
 */
export const invokeModel = async (prompt, modelId = "anthropic.claude-v2") => {
  // Create a new Bedrock Runtime client instance.
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  // Prepare the payload for the Messages API request.
  const payload = {
    anthropic_version: "bedrock-2023-05-31",
    max_tokens: 1000,
    messages: [
      {
        role: "user",
        content: [{ type: "text", text: prompt }],
      },
    ],
  };

  // Invoke Claude with the payload and wait for the response.
  const command = new InvokeModelCommand({
    contentType: "application/json",
    body: JSON.stringify(payload),
    modelId,
  });
  const apiResponse = await client.send(command);

  // Decode and return the response(s)
  const decodedResponseBody = new TextDecoder().decode(apiResponse.body);
  /** @type {MessagesResponseBody} */
  const responseBody = JSON.parse(decodedResponseBody);
  return responseBody.content[0].text;
};

/**
 * Invokes Anthropic Claude 2.x using the Text Completions API.
 *
 * To learn more about the Anthropic Text Completions API, go to:
 * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-text-completion.html
 *
 * @param {string} prompt - The input text prompt for the model to complete.
 * @param {string} [modelId] - The ID of the model to use. Defaults to "anthropic.claude-v2".
 */
export const invokeTextCompletionsApi = async (
  prompt,
  modelId = "anthropic.claude-v2",
) => {
  // Create a new Bedrock Runtime client instance.
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  // Prepare the payload for the Text Completions API, using the required prompt template.
  const enclosedPrompt = `Human: ${prompt}\n\nAssistant:`;
  const payload = {
    prompt: enclosedPrompt,
    max_tokens_to_sample: 500,
    temperature: 0.5,
    stop_sequences: ["\n\nHuman:"],
  };

  // Invoke Claude with the payload and wait for the response.
  const command = new InvokeModelCommand({
    contentType: "application/json",
    body: JSON.stringify(payload),
    modelId,
  });
  const apiResponse = await client.send(command);

  // Decode and return the response.
  const decoded = new TextDecoder().decode(apiResponse.body);
  /** @type {TextCompletionsResponseBody} */
  const responseBody = JSON.parse(decoded);
  return responseBody.completion;
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt = 'Complete the following in one sentence: "Once upon a time..."';
  const modelId = FoundationModels.CLAUDE_2.modelId;
  console.log(`Prompt: ${prompt}`);
  console.log(`Model ID: ${modelId}`);

  try {
    console.log("-".repeat(53));
    console.log("Using the Messages API:");
    const response = await invokeModel(prompt, modelId);
    console.log(response);
  } catch (err) {
    console.log(err);
  }

  try {
    console.log("-".repeat(53));
    console.log("Using the Text Completions API:");
    const response = await invokeTextCompletionsApi(prompt, modelId);
    console.log(response);
  } catch (err) {
    console.log(err);
  }
}
  • For API details, see InvokeModel in the AWS SDK for JavaScript API Reference.
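The JavaScript example above is the only one on this page that also uses the newer Anthropic Messages API. As a minimal Python sketch of that request body (payload construction only, with no Bedrock call; field names and values are taken directly from the example above):

```python
import json


def build_claude_messages_body(prompt: str) -> str:
    """Build the JSON request body for the Anthropic Messages API on Bedrock.

    The Messages format replaces the Human/Assistant prompt template with a
    structured list of messages; anthropic_version is required by Bedrock.
    """
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1000,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]},
        ],
    }
    return json.dumps(body)


# In the Messages response, the generated text is in a list of content
# blocks rather than a top-level "completion" field, e.g.:
# parsed_response["content"][0]["text"]
print(build_claude_messages_body("Hi"))
```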

PHP
SDK for PHP
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Invoke the Anthropic Claude 2 foundation model to generate text.

public function invokeClaude($prompt)
{
    # The different model providers have individual request and response formats.
    # For the format, ranges, and default values for Anthropic Claude, refer to:
    #     https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html
    $completion = "";
    try {
        $modelId = 'anthropic.claude-v2';
        # Claude requires you to enclose the prompt as follows:
        $prompt = "\n\nHuman: {$prompt}\n\nAssistant:";
        $body = [
            'prompt' => $prompt,
            'max_tokens_to_sample' => 200,
            'temperature' => 0.5,
            'stop_sequences' => ["\n\nHuman:"],
        ];
        $result = $this->bedrockRuntimeClient->invokeModel([
            'contentType' => 'application/json',
            'body' => json_encode($body),
            'modelId' => $modelId,
        ]);
        $response_body = json_decode($result['body']);
        $completion = $response_body->completion;
    } catch (Exception $e) {
        echo "Error: ({$e->getCode()}) - {$e->getMessage()}\n";
    }
    return $completion;
}
  • For API details, see InvokeModel in the AWS SDK for PHP API Reference.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Invoke the Anthropic Claude 2 foundation model to generate text.

def invoke_claude(self, prompt):
    """
    Invokes the Anthropic Claude 2 model to run an inference using the input
    provided in the request body.

    :param prompt: The prompt that you want Claude to complete.
    :return: Inference response from the model.
    """
    try:
        # The different model providers have individual request and response formats.
        # For the format, ranges, and default values for Anthropic Claude, refer to:
        # https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html

        # Claude requires you to enclose the prompt as follows:
        enclosed_prompt = "Human: " + prompt + "\n\nAssistant:"

        body = {
            "prompt": enclosed_prompt,
            "max_tokens_to_sample": 200,
            "temperature": 0.5,
            "stop_sequences": ["\n\nHuman:"],
        }

        response = self.bedrock_runtime_client.invoke_model(
            modelId="anthropic.claude-v2", body=json.dumps(body)
        )

        response_body = json.loads(response["body"].read())
        completion = response_body["completion"]

        return completion

    except ClientError:
        logger.error("Couldn't invoke Anthropic Claude")
        raise
  • For API details, see InvokeModel in the AWS SDK for Python (Boto3) API Reference.

SAP ABAP
SDK for SAP ABAP
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Invoke the Anthropic Claude 2 foundation model to generate text. This example uses features of /UI2/CL_JSON that may not be available on some NetWeaver versions.

    "Claude V2 Input Parameters should be in a format like this:
*   {
*     "prompt":"\n\nHuman:\\nTell me a joke\n\nAssistant:\n",
*     "max_tokens_to_sample":2048,
*     "temperature":0.5,
*     "top_k":250,
*     "top_p":1.0,
*     "stop_sequences":[]
*   }
    DATA: BEGIN OF ls_input,
            prompt               TYPE string,
            max_tokens_to_sample TYPE /aws1/rt_shape_integer,
            temperature          TYPE /aws1/rt_shape_float,
            top_k                TYPE /aws1/rt_shape_integer,
            top_p                TYPE /aws1/rt_shape_float,
            stop_sequences       TYPE /aws1/rt_stringtab,
          END OF ls_input.

    "Leave ls_input-stop_sequences empty.
    ls_input-prompt = |\n\nHuman:\\n{ iv_prompt }\n\nAssistant:\n|.
    ls_input-max_tokens_to_sample = 2048.
    ls_input-temperature = '0.5'.
    ls_input-top_k = 250.
    ls_input-top_p = 1.

    "Serialize into JSON with /ui2/cl_json -- this assumes SAP_UI is installed.
    DATA(lv_json) = /ui2/cl_json=>serialize(
      data        = ls_input
      pretty_name = /ui2/cl_json=>pretty_mode-low_case ).

    TRY.
        DATA(lo_response) = lo_bdr->invokemodel(
          iv_body        = /aws1/cl_rt_util=>string_to_xstring( lv_json )
          iv_modelid     = 'anthropic.claude-v2'
          iv_accept      = 'application/json'
          iv_contenttype = 'application/json' ).

        "Claude V2 Response format will be:
*       {
*         "completion": "Knock Knock...",
*         "stop_reason": "stop_sequence"
*       }
        DATA: BEGIN OF ls_response,
                completion  TYPE string,
                stop_reason TYPE string,
              END OF ls_response.

        /ui2/cl_json=>deserialize(
          EXPORTING jsonx       = lo_response->get_body( )
                    pretty_name = /ui2/cl_json=>pretty_mode-camel_case
          CHANGING  data        = ls_response ).

        DATA(lv_answer) = ls_response-completion.
      CATCH /aws1/cx_bdraccessdeniedex INTO DATA(lo_ex).
        WRITE / lo_ex->get_text( ).
        WRITE / |Don't forget to enable model access at https://console.aws.amazon.com/bedrock/home?#/modelaccess|.
    ENDTRY.

Invoke the Anthropic Claude 2 foundation model to generate text, using the L2 high-level client.

    TRY.
        DATA(lo_bdr_l2_claude) = /aws1/cl_bdr_l2_factory=>create_claude_2( lo_bdr ).
        " iv_prompt can contain a prompt like 'tell me a joke about Java programmers'.
        DATA(lv_answer) = lo_bdr_l2_claude->prompt_for_text( iv_prompt ).
      CATCH /aws1/cx_bdraccessdeniedex INTO DATA(lo_ex).
        WRITE / lo_ex->get_text( ).
        WRITE / |Don't forget to enable model access at https://console.aws.amazon.com/bedrock/home?#/modelaccess|.
    ENDTRY.
  • For API details, see InvokeModel in the AWS SDK for SAP ABAP API Reference.