More AWS SDK examples are available in the AWS Doc SDK Examples repository on GitHub.
This content is a machine translation of the English original. If there is any ambiguity or inconsistency, the English version prevails.
Amazon Bedrock Runtime examples using AWS SDK for .NET
The following code examples show you how to use the AWS SDK for .NET with Amazon Bedrock Runtime to perform actions and implement common scenarios.
Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service, or combined with other AWS services.
Each example includes a link to the complete source code, where you can find instructions on how to set up and run the code in context.
Scenarios
The following code example shows how to create playgrounds to interact with Amazon Bedrock foundation models through different modalities.
- AWS SDK for .NET
The .NET Foundation Model (FM) Playground is a .NET MAUI Blazor sample application that showcases how to use Amazon Bedrock from C# code. This example shows how .NET and C# developers can use Amazon Bedrock to build generative AI-enabled applications. You can test and interact with Amazon Bedrock foundation models by using the following four playgrounds:
- Text playground
- Chat playground
- Voice chat playground
- Image playground
The example also lists and displays the foundation models you have access to, along with their characteristics. For the source code and deployment instructions, see the project on GitHub.
Services used in this example
- Amazon Bedrock Runtime
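As a rough companion sketch (not part of the playground's published source): listing the available foundation models is done with the Amazon Bedrock control-plane client from the Amazon.Bedrock package, which is separate from the Amazon Bedrock Runtime client used in the examples below. Treat the snippet as a minimal sketch under that assumption.

// Minimal sketch: list the foundation models available to your account.
// Assumes the Amazon.Bedrock package (control-plane client), not Amazon.BedrockRuntime.
using System;
using Amazon;
using Amazon.Bedrock;
using Amazon.Bedrock.Model;

var bedrockClient = new AmazonBedrockClient(RegionEndpoint.USEast1);

// Retrieve the model summaries and print each model's ID and provider.
var response = await bedrockClient.ListFoundationModelsAsync(new ListFoundationModelsRequest());
foreach (var model in response.ModelSummaries)
{
    Console.WriteLine($"{model.ModelId} ({model.ProviderName})");
}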
AI21 Labs Jurassic-2
The following code example shows how to send a text message to AI21 Labs Jurassic-2, using Bedrock's Converse API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to AI21 Labs Jurassic-2, using Bedrock's Converse API.
// Use the Converse API to send a text message to AI21 Labs Jurassic-2.

using System;
using System.Collections.Generic;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Jurassic-2 Mid.
var modelId = "ai21.j2-mid-v1";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseAsync(request);

    // Extract and print the response text.
    string responseText = response?.Output?.Message?.Content?[0]?.Text ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
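A setup note, not part of the original sample: these snippets are written as C# top-level statements that use await, so they assume a console project targeting .NET 6 or later, with AWS credentials available through the default credential chain (for example, a configured AWS profile).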
For API details, see Converse in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to AI21 Labs Jurassic-2, using the Invoke Model API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message.
// Use the native inference API to send a text message to AI21 Labs Jurassic-2.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Jurassic-2 Mid.
var modelId = "ai21.j2-mid-v1";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = userMessage,
    maxTokens = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var response = await client.InvokeModelAsync(request);

    // Decode the response body.
    var modelResponse = await JsonNode.ParseAsync(response.Body);

    // Extract and print the response text.
    var responseText = modelResponse["completions"]?[0]?["data"]?["text"] ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModel in the AWS SDK for .NET API Reference.
Amazon Titan Text
The following code example shows how to send a text message to Amazon Titan Text, using Bedrock's Converse API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Amazon Titan Text, using Bedrock's Converse API.
// Use the Converse API to send a text message to Amazon Titan Text.

using System;
using System.Collections.Generic;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Titan Text Premier.
var modelId = "amazon.titan-text-premier-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseAsync(request);

    // Extract and print the response text.
    string responseText = response?.Output?.Message?.Content?[0]?.Text ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see Converse in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Amazon Titan Text, using Bedrock's Converse API, and process the response stream in real-time.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Amazon Titan Text, using Bedrock's Converse API, and process the response stream in real-time.
// Use the Converse API to send a text message to Amazon Titan Text
// and print the response stream.

using System;
using System.Collections.Generic;
using System.Linq;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Titan Text Premier.
var modelId = "amazon.titan-text-premier-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseStreamRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var chunk in response.Stream.AsEnumerable())
    {
        if (chunk is ContentBlockDeltaEvent)
        {
            Console.Write((chunk as ContentBlockDeltaEvent).Delta.Text);
        }
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
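A small style note: the is/as pair in the stream loop can be collapsed with C# pattern matching; the variant below is behaviorally identical and purely a stylistic choice.

foreach (var chunk in response.Stream.AsEnumerable())
{
    // Match and cast the chunk in one step.
    if (chunk is ContentBlockDeltaEvent delta)
    {
        Console.Write(delta.Delta.Text);
    }
}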
For API details, see ConverseStream in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Amazon Titan Text, using the Invoke Model API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message.
// Use the native inference API to send a text message to Amazon Titan Text.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Titan Text Premier.
var modelId = "amazon.titan-text-premier-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    inputText = userMessage,
    textGenerationConfig = new
    {
        maxTokenCount = 512,
        temperature = 0.5
    }
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var response = await client.InvokeModelAsync(request);

    // Decode the response body.
    var modelResponse = await JsonNode.ParseAsync(response.Body);

    // Extract and print the response text.
    var responseText = modelResponse["results"]?[0]?["outputText"] ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModel in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Amazon Titan Text models, using the Invoke Model API, and print the response stream.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message and process the response stream in real-time.
// Use the native inference API to send a text message to Amazon Titan Text
// and print the response stream.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Titan Text Premier.
var modelId = "amazon.titan-text-premier-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    inputText = userMessage,
    textGenerationConfig = new
    {
        maxTokenCount = 512,
        temperature = 0.5
    }
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelWithResponseStreamRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var streamingResponse = await client.InvokeModelWithResponseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var item in streamingResponse.Body)
    {
        var chunk = JsonSerializer.Deserialize<JsonObject>((item as PayloadPart).Bytes);
        var text = chunk["outputText"] ?? "";
        Console.Write(text);
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModelWithResponseStream in the AWS SDK for .NET API Reference.
Anthropic Claude
The following code example shows how to send a text message to Anthropic Claude, using Bedrock's Converse API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Anthropic Claude, using Bedrock's Converse API.
// Use the Converse API to send a text message to Anthropic Claude.

using System;
using System.Collections.Generic;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Claude 3 Haiku.
var modelId = "anthropic.claude-3-haiku-20240307-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseAsync(request);

    // Extract and print the response text.
    string responseText = response?.Output?.Message?.Content?[0]?.Text ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see Converse in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Anthropic Claude, using Bedrock's Converse API, and process the response stream in real-time.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Anthropic Claude, using Bedrock's Converse API, and process the response stream in real-time.
// Use the Converse API to send a text message to Anthropic Claude
// and print the response stream.

using System;
using System.Collections.Generic;
using System.Linq;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Claude 3 Haiku.
var modelId = "anthropic.claude-3-haiku-20240307-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseStreamRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var chunk in response.Stream.AsEnumerable())
    {
        if (chunk is ContentBlockDeltaEvent)
        {
            Console.Write((chunk as ContentBlockDeltaEvent).Delta.Text);
        }
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see ConverseStream in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Anthropic Claude, using the Invoke Model API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message.
// Use the native inference API to send a text message to Anthropic Claude.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Claude 3 Haiku.
var modelId = "anthropic.claude-3-haiku-20240307-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    anthropic_version = "bedrock-2023-05-31",
    max_tokens = 512,
    temperature = 0.5,
    messages = new[]
    {
        new { role = "user", content = userMessage }
    }
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var response = await client.InvokeModelAsync(request);

    // Decode the response body.
    var modelResponse = await JsonNode.ParseAsync(response.Body);

    // Extract and print the response text.
    var responseText = modelResponse["content"]?[0]?["text"] ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModel in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Anthropic Claude models, using the Invoke Model API, and print the response stream.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message and process the response stream in real-time.
// Use the native inference API to send a text message to Anthropic Claude
// and print the response stream.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Claude 3 Haiku.
var modelId = "anthropic.claude-3-haiku-20240307-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    anthropic_version = "bedrock-2023-05-31",
    max_tokens = 512,
    temperature = 0.5,
    messages = new[]
    {
        new { role = "user", content = userMessage }
    }
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelWithResponseStreamRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var streamingResponse = await client.InvokeModelWithResponseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var item in streamingResponse.Body)
    {
        var chunk = JsonSerializer.Deserialize<JsonObject>((item as PayloadPart).Bytes);
        var text = chunk["delta"]?["text"] ?? "";
        Console.Write(text);
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModelWithResponseStream in the AWS SDK for .NET API Reference.
Cohere Command
The following code example shows how to send a text message to Cohere Command, using Bedrock's Converse API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Cohere Command, using Bedrock's Converse API.
// Use the Converse API to send a text message to Cohere Command.

using System;
using System.Collections.Generic;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Command R.
var modelId = "cohere.command-r-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseAsync(request);

    // Extract and print the response text.
    string responseText = response?.Output?.Message?.Content?[0]?.Text ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see Converse in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Cohere Command, using Bedrock's Converse API, and process the response stream in real-time.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Cohere Command, using Bedrock's Converse API, and process the response stream in real-time.
// Use the Converse API to send a text message to Cohere Command
// and print the response stream.

using System;
using System.Collections.Generic;
using System.Linq;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Command R.
var modelId = "cohere.command-r-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseStreamRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var chunk in response.Stream.AsEnumerable())
    {
        if (chunk is ContentBlockDeltaEvent)
        {
            Console.Write((chunk as ContentBlockDeltaEvent).Delta.Text);
        }
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see ConverseStream in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Cohere Command R and R+, using the Invoke Model API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message.
// Use the native inference API to send a text message to Cohere Command R.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Command R.
var modelId = "cohere.command-r-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    message = userMessage,
    max_tokens = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var response = await client.InvokeModelAsync(request);

    // Decode the response body.
    var modelResponse = await JsonNode.ParseAsync(response.Body);

    // Extract and print the response text.
    var responseText = modelResponse["text"] ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModel in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Cohere Command, using the Invoke Model API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message.
// Use the native inference API to send a text message to Cohere Command.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Command Light.
var modelId = "cohere.command-light-text-v14";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = userMessage,
    max_tokens = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var response = await client.InvokeModelAsync(request);

    // Decode the response body.
    var modelResponse = await JsonNode.ParseAsync(response.Body);

    // Extract and print the response text.
    var responseText = modelResponse["generations"]?[0]?["text"] ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
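Note the payload difference between the two Cohere examples above: Command R takes a message field and returns its completion in a top-level text field, while the legacy Command models take a prompt and return the completion under generations[0].text.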
For API details, see InvokeModel in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Cohere Command R and R+, using the Invoke Model API with a response stream.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message and process the response stream in real-time.
// Use the native inference API to send a text message to Cohere Command R
// and print the response stream.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Command R.
var modelId = "cohere.command-r-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    message = userMessage,
    max_tokens = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelWithResponseStreamRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var streamingResponse = await client.InvokeModelWithResponseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var item in streamingResponse.Body)
    {
        var chunk = JsonSerializer.Deserialize<JsonObject>((item as PayloadPart).Bytes);
        var text = chunk["text"] ?? "";
        Console.Write(text);
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModelWithResponseStream in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Cohere Command, using the Invoke Model API with a response stream.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message and process the response stream in real-time.
// Use the native inference API to send a text message to Cohere Command
// and print the response stream.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Command Light.
var modelId = "cohere.command-light-text-v14";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = userMessage,
    max_tokens = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelWithResponseStreamRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var streamingResponse = await client.InvokeModelWithResponseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var item in streamingResponse.Body)
    {
        var chunk = JsonSerializer.Deserialize<JsonObject>((item as PayloadPart).Bytes);
        var text = chunk["generations"]?[0]?["text"] ?? "";
        Console.Write(text);
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModelWithResponseStream in the AWS SDK for .NET API Reference.
Meta Llama
The following code example shows how to send a text message to Meta Llama, using Bedrock's Converse API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Meta Llama, using Bedrock's Converse API.
// Use the Converse API to send a text message to Meta Llama.

using System;
using System.Collections.Generic;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Llama 3 8b Instruct.
var modelId = "meta.llama3-8b-instruct-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseAsync(request);

    // Extract and print the response text.
    string responseText = response?.Output?.Message?.Content?[0]?.Text ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see Converse in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Meta Llama, using Bedrock's Converse API, and process the response stream in real-time.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Meta Llama, using Bedrock's Converse API, and process the response stream in real-time.
// Use the Converse API to send a text message to Meta Llama
// and print the response stream.

using System;
using System.Collections.Generic;
using System.Linq;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Llama 3 8b Instruct.
var modelId = "meta.llama3-8b-instruct-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseStreamRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var chunk in response.Stream.AsEnumerable())
    {
        if (chunk is ContentBlockDeltaEvent)
        {
            Console.Write((chunk as ContentBlockDeltaEvent).Delta.Text);
        }
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see ConverseStream in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Meta Llama 2, using the Invoke Model API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message.
// Use the native inference API to send a text message to Meta Llama 2.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Llama 2 Chat 13B.
var modelId = "meta.llama2-13b-chat-v1";

// Define the prompt for the model.
var prompt = "Describe the purpose of a 'hello world' program in one line.";

// Embed the prompt in Llama 2's instruction format.
var formattedPrompt = $"<s>[INST] {prompt} [/INST]";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = formattedPrompt,
    max_gen_len = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var response = await client.InvokeModelAsync(request);

    // Decode the response body.
    var modelResponse = await JsonNode.ParseAsync(response.Body);

    // Extract and print the response text.
    var responseText = modelResponse["generation"] ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModel in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Meta Llama 3, using the Invoke Model API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message.
// Use the native inference API to send a text message to Meta Llama 3.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USWest2);

// Set the model ID, e.g., Llama 3 70b Instruct.
var modelId = "meta.llama3-70b-instruct-v1:0";

// Define the prompt for the model.
var prompt = "Describe the purpose of a 'hello world' program in one line.";

// Embed the prompt in Llama 3's instruction format.
var formattedPrompt = $@"
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
{prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = formattedPrompt,
    max_gen_len = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var response = await client.InvokeModelAsync(request);

    // Decode the response body.
    var modelResponse = await JsonNode.ParseAsync(response.Body);

    // Extract and print the response text.
    var responseText = modelResponse["generation"] ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModel in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Meta Llama 2, using the Invoke Model API, and print the response stream.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message and process the response stream in real-time.
// Use the native inference API to send a text message to Meta Llama 2
// and print the response stream.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Llama 2 Chat 13B.
var modelId = "meta.llama2-13b-chat-v1";

// Define the prompt for the model.
var prompt = "Describe the purpose of a 'hello world' program in one line.";

// Embed the prompt in Llama 2's instruction format.
var formattedPrompt = $"<s>[INST] {prompt} [/INST]";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = formattedPrompt,
    max_gen_len = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelWithResponseStreamRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var streamingResponse = await client.InvokeModelWithResponseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var item in streamingResponse.Body)
    {
        var chunk = JsonSerializer.Deserialize<JsonObject>((item as PayloadPart).Bytes);
        var text = chunk["generation"] ?? "";
        Console.Write(text);
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModelWithResponseStream in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Meta Llama 3, using the Invoke Model API, and print the response stream.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message and process the response stream in real-time.
// Use the native inference API to send a text message to Meta Llama 3
// and print the response stream.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USWest2);

// Set the model ID, e.g., Llama 3 70b Instruct.
var modelId = "meta.llama3-70b-instruct-v1:0";

// Define the prompt for the model.
var prompt = "Describe the purpose of a 'hello world' program in one line.";

// Embed the prompt in Llama 3's instruction format.
var formattedPrompt = $@"
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
{prompt}
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = formattedPrompt,
    max_gen_len = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelWithResponseStreamRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var streamingResponse = await client.InvokeModelWithResponseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var item in streamingResponse.Body)
    {
        var chunk = JsonSerializer.Deserialize<JsonObject>((item as PayloadPart).Bytes);
        var text = chunk["generation"] ?? "";
        Console.Write(text);
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModelWithResponseStream in the AWS SDK for .NET API Reference.
Mistral AI
The following code example shows how to send a text message to Mistral, using Bedrock's Converse API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Mistral, using Bedrock's Converse API.
// Use the Converse API to send a text message to Mistral.

using System;
using System.Collections.Generic;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Mistral Large.
var modelId = "mistral.mistral-large-2402-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseAsync(request);

    // Extract and print the response text.
    string responseText = response?.Output?.Message?.Content?[0]?.Text ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see Converse in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Mistral, using Bedrock's Converse API, and process the response stream in real-time.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Send a text message to Mistral, using Bedrock's Converse API, and process the response stream in real-time.
// Use the Converse API to send a text message to Mistral
// and print the response stream.

using System;
using System.Collections.Generic;
using System.Linq;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Mistral Large.
var modelId = "mistral.mistral-large-2402-v1:0";

// Define the user message.
var userMessage = "Describe the purpose of a 'hello world' program in one line.";

// Create a request with the model ID, the user message, and an inference configuration.
var request = new ConverseStreamRequest
{
    ModelId = modelId,
    Messages = new List<Message>
    {
        new Message
        {
            Role = ConversationRole.User,
            Content = new List<ContentBlock> { new ContentBlock { Text = userMessage } }
        }
    },
    InferenceConfig = new InferenceConfiguration()
    {
        MaxTokens = 512,
        Temperature = 0.5F,
        TopP = 0.9F
    }
};

try
{
    // Send the request to the Bedrock Runtime and wait for the result.
    var response = await client.ConverseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var chunk in response.Stream.AsEnumerable())
    {
        if (chunk is ContentBlockDeltaEvent)
        {
            Console.Write((chunk as ContentBlockDeltaEvent).Delta.Text);
        }
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see ConverseStream in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Mistral models, using the Invoke Model API.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message.
// Use the native inference API to send a text message to Mistral.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Mistral Large.
var modelId = "mistral.mistral-large-2402-v1:0";

// Define the prompt for the model.
var prompt = "Describe the purpose of a 'hello world' program in one line.";

// Embed the prompt in Mistral's instruction format.
var formattedPrompt = $"<s>[INST] {prompt} [/INST]";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = formattedPrompt,
    max_tokens = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var response = await client.InvokeModelAsync(request);

    // Decode the response body.
    var modelResponse = await JsonNode.ParseAsync(response.Body);

    // Extract and print the response text.
    var responseText = modelResponse["outputs"]?[0]?["text"] ?? "";
    Console.WriteLine(responseText);
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
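As with the Meta Llama 2 examples above, the prompt is wrapped in the <s>[INST] ... [/INST] instruction format; Mistral, however, returns its completion under outputs[0].text rather than a generation field.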
For API details, see InvokeModel in the AWS SDK for .NET API Reference.
The following code example shows how to send a text message to Mistral AI models, using the Invoke Model API, and print the response stream.
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Use the Invoke Model API to send a text message and process the response stream in real-time.
// Use the native inference API to send a text message to Mistral
// and print the response stream.

using System;
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;
using Amazon;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;

// Create a Bedrock Runtime client in the AWS Region you want to use.
var client = new AmazonBedrockRuntimeClient(RegionEndpoint.USEast1);

// Set the model ID, e.g., Mistral Large.
var modelId = "mistral.mistral-large-2402-v1:0";

// Define the prompt for the model.
var prompt = "Describe the purpose of a 'hello world' program in one line.";

// Embed the prompt in Mistral's instruction format.
var formattedPrompt = $"<s>[INST] {prompt} [/INST]";

// Format the request payload using the model's native structure.
var nativeRequest = JsonSerializer.Serialize(new
{
    prompt = formattedPrompt,
    max_tokens = 512,
    temperature = 0.5
});

// Create a request with the model ID and the model's native request payload.
var request = new InvokeModelWithResponseStreamRequest()
{
    ModelId = modelId,
    Body = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(nativeRequest)),
    ContentType = "application/json"
};

try
{
    // Send the request to the Bedrock Runtime and wait for the response.
    var streamingResponse = await client.InvokeModelWithResponseStreamAsync(request);

    // Extract and print the streamed response text in real-time.
    foreach (var item in streamingResponse.Body)
    {
        var chunk = JsonSerializer.Deserialize<JsonObject>((item as PayloadPart).Bytes);
        var text = chunk["outputs"]?[0]?["text"] ?? "";
        Console.Write(text);
    }
}
catch (AmazonBedrockRuntimeException e)
{
    Console.WriteLine($"ERROR: Can't invoke '{modelId}'. Reason: {e.Message}");
    throw;
}
For API details, see InvokeModelWithResponseStream in the AWS SDK for .NET API Reference.