Invoke Meta Llama 3 on Amazon Bedrock using the Invoke Model API with a response stream



The following code examples show how to use the Invoke Model API to send a text message to Meta Llama 3 and print the response stream.
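Before the SDK-specific examples, it can help to see the shape of the data that crosses the wire. The following minimal Python sketch mirrors the request fields (prompt, max_gen_len, temperature) and the generation chunk field that all four examples use; the literal chunk value is illustrative only, and any additional stream fields should be checked against the Bedrock user guide for Meta Llama inference parameters.

# A minimal sketch of the payloads used by the examples below (not an SDK call).
# The request fields and the "generation" chunk field match the examples;
# anything else should be verified against the Bedrock Meta Llama documentation.
import json

# Native request payload: the Llama 3-formatted prompt plus optional parameters.
native_request = {
    "prompt": (
        "<|begin_of_text|>\n"
        "<|start_header_id|>user<|end_header_id|>\n"
        "Describe the purpose of a 'hello world' program in one line.\n"
        "<|eot_id|>\n"
        "<|start_header_id|>assistant<|end_header_id|>\n"
    ),
    "max_gen_len": 512,   # maximum number of tokens to generate
    "temperature": 0.5,   # sampling temperature
}
body = json.dumps(native_request)  # serialized and sent as the request body

# Each streamed event carries a JSON chunk; the generated text arrives
# incrementally in the "generation" field. The value here is illustrative.
sample_chunk = json.loads('{"generation": "A hello world program verifies..."}')
print(sample_chunk["generation"])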

.NET
AWS SDK for .NET
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

// Use the native inference API to send a text message to Meta Llama 3
// and print the response stream.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Llama 3 8b Instruct.
var modelId = "meta.llama3-8b-instruct-v1:0";

// Define the prompt for the model.
var prompt = "Describe the purpose of a 'hello world' program in one line.";

// Embed the prompt in Llama 3's instruction format.
var formattedPrompt = $@"
<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
{prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = formattedPrompt,
    max_gen_len = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelWithResponseStreamRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var streamingResponse = await client.InvokeModelWithResponseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var item in streamingResponse.Body)
    {
        var chunk = JsonSerializer.Deserialize<JsonObject>((item as PayloadPart).Bytes);
        var text = chunk["generation"] ?? "";
        Console.Write(text);
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
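In this example the response body is consumed as an event stream: each item is a PayloadPart whose Bytes property holds one JSON chunk, and the generated text is read from the chunk's generation field. Because the snippet uses top-level await, it must run in a context that supports asynchronous top-level statements, such as a console project using top-level statements.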
Java
SDK for Java 2.x
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

// Use the native inference API to send a text message to Meta Llama 3
// and print the response stream.

import org.json.JSONObject;
import org.json.JSONPointer;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeAsyncClient;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelWithResponseStreamRequest;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelWithResponseStreamResponseHandler;

import java.util.concurrent.ExecutionException;

import static software.amazon.awssdk.services.bedrockruntime.model.InvokeModelWithResponseStreamResponseHandler.Visitor;

public class Llama3_InvokeModelWithResponseStream {

    public static String invokeModelWithResponseStream() throws ExecutionException, InterruptedException {

        // Create a Bedrock Runtime client in the AWS Region you want to use.
        // Replace the DefaultCredentialsProvider with your preferred credentials provider.
        var client = BedrockRuntimeAsyncClient.builder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .build();

        // Set the model ID, e.g., Llama 3 8b Instruct.
        var modelId = "meta.llama3-8b-instruct-v1:0";

        // The InvokeModelWithResponseStream API uses the model's native payload.
        // Learn more about the available inference parameters and response fields at:
        // https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-meta.html
        var nativeRequestTemplate = "{ \"prompt\": \"{{instruction}}\" }";

        // Define the prompt for the model.
        var prompt = "Describe the purpose of a 'hello world' program in one line.";

        // Embed the prompt in Llama 3's instruction format.
        var instruction = (
                "<|begin_of_text|>\\n" +
                "<|start_header_id|>user<|end_header_id|>\\n" +
                "{{prompt}} <|eot_id|>\\n" +
                "<|start_header_id|>assistant<|end_header_id|>\\n"
        ).replace("{{prompt}}", prompt);

        // Embed the instruction in the native request payload.
        var nativeRequest = nativeRequestTemplate.replace("{{instruction}}", instruction);

        // Create a request with the model ID and the model's native request payload.
        var request = InvokeModelWithResponseStreamRequest.builder()
                .body(SdkBytes.fromUtf8String(nativeRequest))
                .modelId(modelId)
                .build();

        // Prepare a buffer to accumulate the generated response text.
        var completeResponseTextBuffer = new StringBuilder();

        // Prepare a handler to extract, accumulate, and print the response text in real-time.
        var responseStreamHandler = InvokeModelWithResponseStreamResponseHandler.builder()
                .subscriber(Visitor.builder().onChunk(chunk -> {
                    // Extract and print the text from the model's native response.
                    var response = new JSONObject(chunk.bytes().asUtf8String());
                    var text = new JSONPointer("/generation").queryFrom(response);
                    System.out.print(text);

                    // Append the text to the response text buffer.
                    completeResponseTextBuffer.append(text);
                }).build()).build();

        try {
            // Send the request and wait for the handler to process the response.
            client.invokeModelWithResponseStream(request, responseStreamHandler).get();

            // Return the complete response text.
            return completeResponseTextBuffer.toString();

        } catch (ExecutionException | InterruptedException e) {
            System.err.printf("Can't invoke '%s': %s", modelId, e.getCause().getMessage());
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        invokeModelWithResponseStream();
    }
}
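This example uses the asynchronous Bedrock Runtime client: chunks are delivered to the Visitor's onChunk callback as the model generates them, and the final .get() call blocks until the stream completes. Note that the newline escapes in the instruction template are doubled (\\n) because the prompt is spliced into a raw JSON string template rather than serialized by a JSON library.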
JavaScript
SDK for JavaScript (v3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

// Send a prompt to Meta Llama 3 and print the response stream in real-time.

import {
  BedrockRuntimeClient,
  InvokeModelWithResponseStreamCommand,
} from "@aws-sdk/client-bedrock-runtime";

// Create a Bedrock Runtime client in the AWS Region of your choice.
const client = new BedrockRuntimeClient({ region: "us-west-2" });

// Set the model ID, e.g., Llama 3 8B Instruct.
const modelId = "meta.llama3-8b-instruct-v1:0";

// Define the user message to send.
const userMessage =
  "Describe the purpose of a 'hello world' program in one sentence.";

// Embed the message in Llama 3's prompt format.
const prompt = `
<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
${userMessage}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
`;

// Format the request payload using the model's native structure.
const request = {
  prompt,
  // Optional inference parameters:
  max_gen_len: 512,
  temperature: 0.5,
  top_p: 0.9,
};

// Encode and send the request.
const responseStream = await client.send(
  new InvokeModelWithResponseStreamCommand({
    contentType: "application/json",
    body: JSON.stringify(request),
    modelId,
  }),
);

// Extract and print the response stream in real-time.
for await (const event of responseStream.body) {
  /** @type {{ generation: string }} */
  const chunk = JSON.parse(new TextDecoder().decode(event.chunk.bytes));
  if (chunk.generation) {
    process.stdout.write(chunk.generation);
  }
}

// Learn more about the Llama 3 prompt format at:
// https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/#special-tokens-used-with-meta-llama-3
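Here the response body is an async iterable, so for await...of yields events as they arrive; each event's chunk.bytes is UTF-8 encoded JSON that is decoded and parsed before reading the generation field. Since the snippet uses top-level await, it must run as an ES module (for example, with "type": "module" in package.json).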
Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

Send a text message using the Invoke Model API and process the response stream in real time.

# Use the native inference API to send a text message to Meta Llama 3
# and print the response stream.

import boto3
import json

from botocore.exceptions import ClientError

# Create a Bedrock Runtime client in the AWS Region of your choice.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Set the model ID, e.g., Llama 3 8b Instruct.
model_id = "meta.llama3-8b-instruct-v1:0"

# Define the prompt for the model.
prompt = "Describe the purpose of a 'hello world' program in one line."

# Embed the prompt in Llama 3's instruction format.
formatted_prompt = f"""
<|begin_of_text|>
<|start_header_id|>user<|end_header_id|>
{prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""

# Format the request payload using the model's native structure.
native_request = {
    "prompt": formatted_prompt,
    "max_gen_len": 512,
    "temperature": 0.5,
}

# Convert the native request to JSON.
request = json.dumps(native_request)

try:
    # Invoke the model with the request.
    streaming_response = client.invoke_model_with_response_stream(
        modelId=model_id, body=request
    )

    # Extract and print the response text in real-time.
    for event in streaming_response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if "generation" in chunk:
            print(chunk["generation"], end="")

except (ClientError, Exception) as e:
    print(f"ERROR: Can't invoke '{model_id}'. Reason: {e}")
    exit(1)
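If you also need the complete text once streaming finishes, a small variation of the loop accumulates the chunks as they are printed. This is a sketch, not part of the official example: the generation field matches the code above, while reading stop_reason from the final chunk is an assumption based on the documented Meta Llama response format and should be verified for your model version.

# Sketch: accumulate the streamed chunks into the full response text.
# Replace the for-loop inside the try block of the example above with this.
# "generation" matches the example; "stop_reason" in the final chunk is an
# assumption from the Meta Llama response format; verify for your model version.
full_text = []
for event in streaming_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    full_text.append(chunk.get("generation", ""))
    if chunk.get("stop_reason"):  # e.g., "stop" or "length"
        print(f"\n[stop_reason: {chunk['stop_reason']}]")
complete_response = "".join(full_text)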

For a complete list of AWS SDK developer guides and code examples, see Using this service with AWS SDKs. This topic also includes information about getting started and details about previous SDK versions.