MCP server integration
If you deployed the optional MCP server component during solution deployment, you can integrate the Distributed Load Testing solution with AI development tools that support the Model Context Protocol. The MCP server provides programmatic access to retrieve, manage, and analyze load tests through AI assistants.
Customers can connect to the DLT MCP server using the client of their choice (Amazon Q, Claude, and so on); each client has slightly different configuration instructions. This section provides setup instructions for MCP Inspector, Amazon Q CLI, Cline, and Amazon Q Suite.
Step 1: Get MCP endpoint and access token
Before configuring any MCP client, you need to retrieve your MCP server endpoint and access token from the DLT web console.
1. Navigate to the MCP Server page in the Distributed Load Testing web console.
2. Locate the MCP Server Endpoint section.
3. Copy the endpoint URL using the Copy Endpoint URL button. The endpoint URL follows the format:
   https://[api-id].execute-api.[region].amazonaws.com/[stage]/gateway/backend-agent/sse/mcp
4. Locate the Access Token section.
5. Copy the access token using the Copy Access Token button.
Important
Keep your access token secure and do not share it publicly. The token provides read-only access to your Distributed Load Testing solution through the MCP interface.
Step 2: Test with MCP Inspector
The Model Context Protocol project offers MCP Inspector, a developer tool for testing and debugging MCP servers.
Note
MCP Inspector version 0.17 or later is required. All requests can also be made with JSON-RPC directly, but MCP Inspector provides a more user-friendly interface.
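If you prefer to script this check rather than use the Inspector UI, the endpoint can be called with JSON-RPC directly, as the note above describes. The following is a minimal Python sketch, assuming the requests library is installed and that the endpoint accepts JSON-RPC 2.0 over HTTP POST (the streamable HTTP transport); substitute the endpoint URL and access token you copied in Step 1.

import requests

# Placeholders: replace with the endpoint URL and access token
# copied from the DLT web console in Step 1.
MCP_ENDPOINT = "https://[api-id].execute-api.[region].amazonaws.com/[stage]/gateway/backend-agent/sse/mcp"
ACCESS_TOKEN = "your_access_token_here"

headers = {
    "Authorization": ACCESS_TOKEN,
    "Content-Type": "application/json",
    # Streamable HTTP servers may answer with plain JSON or an SSE
    # stream, so advertise both in the Accept header.
    "Accept": "application/json, text/event-stream",
}

# A JSON-RPC 2.0 request listing the tools the server exposes.
# Depending on the server's transport implementation, you may first
# need to complete the MCP "initialize" handshake.
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = requests.post(MCP_ENDPOINT, json=payload, headers=headers)
response.raise_for_status()
print(response.text)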
Install and launch MCP Inspector
1. Install npm if necessary.
2. Run the following command to launch MCP Inspector:
   npx @modelcontextprotocol/inspector
Configure the connection
1. In the MCP Inspector interface, enter your MCP Server Endpoint URL.
2. Add an Authorization header with your access token.
3. Click Connect to establish the connection.
Invoke tools
Once connected, you can test the available MCP tools:
1. Browse the list of available tools in the left panel.
2. Select a tool (for example, list_scenarios).
3. Provide any required parameters.
4. Click Invoke to execute the tool and view the response.
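These invocations can also be scripted. The following Python sketch calls list_scenarios through the standard MCP tools/call method, under the same assumptions as the earlier sketch (your endpoint URL, your access token, and JSON-RPC 2.0 over HTTP POST):

import requests

# Placeholders; use the values from Step 1.
MCP_ENDPOINT = "https://[api-id].execute-api.[region].amazonaws.com/[stage]/gateway/backend-agent/sse/mcp"
ACCESS_TOKEN = "your_access_token_here"

# MCP exposes tool invocation through the JSON-RPC "tools/call"
# method; any parameters the tool requires go in "arguments".
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "list_scenarios", "arguments": {}},
}

response = requests.post(
    MCP_ENDPOINT,
    json=payload,
    headers={
        "Authorization": ACCESS_TOKEN,
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    },
)
response.raise_for_status()
print(response.text)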
Step 3: Configure AI development clients
After verifying your MCP server connection with MCP Inspector, you can configure your preferred AI development client.
Amazon Q CLI
Amazon Q CLI provides command-line access to AI-assisted development with MCP server integration.
Configuration steps
1. Edit the mcp.json configuration file. For more information on the configuration file location, refer to Configuring remote MCP servers in the Amazon Q Developer User Guide.
2. Add your DLT MCP server configuration:

   {
     "mcpServers": {
       "dlt-mcp": {
         "type": "http",
         "url": "https://[api-id].execute-api.[region].amazonaws.com/[stage]/gateway/backend-agent/sse/mcp",
         "headers": {
           "Authorization": "your_access_token_here"
         }
       }
     }
   }
Verify the configuration
1. In a terminal, type q to launch Amazon Q CLI.
2. Type /mcp to see all available MCP servers.
3. Type /tools to see the available tools provided by dlt-mcp and other configured MCP servers.
4. Verify that dlt-mcp successfully initializes.
Cline
Cline is an AI coding assistant that supports MCP server integration.
Configuration steps
1. In Cline, navigate to Manage MCP Servers > Configure > Configure MCP Servers.
2. Update the cline_mcp_settings.json file:

   {
     "mcpServers": {
       "dlt-mcp": {
         "type": "streamableHttp",
         "url": "https://[api-id].execute-api.[region].amazonaws.com/[stage]/gateway/backend-agent/sse/mcp",
         "headers": {
           "Authorization": "your_access_token_here"
         }
       }
     }
   }

3. Save the configuration file.
4. Restart Cline to apply the changes.
Amazon Q Suite
Amazon Q Suite provides a comprehensive AI assistant platform with support for MCP server actions.
Prerequisites
Before configuring the MCP server in Amazon Q Suite, you need to retrieve OAuth credentials from your DLT deployment’s Cognito user pool:
1. Navigate to the AWS CloudFormation console.
2. Select the Distributed Load Testing stack.
3. In the Outputs tab, locate and copy the Cognito User Pool ID associated with your DLT deployment.
4. Navigate to the Amazon Cognito console.
5. Select the user pool using the User Pool ID from the CloudFormation outputs.
6. In the left navigation, select App integration > App clients.
7. Locate the app client with the name ending in m2m (machine-to-machine).
8. Copy the Client ID and Client secret.
9. Get the user pool domain from the Domain tab.
10. Construct the token endpoint URL by appending /oauth2/token to the end of the domain, as shown in the sketch after this list.
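Before configuring Amazon Q Suite, you can optionally confirm that these credentials work by requesting a token yourself. The following is a minimal Python sketch of the standard OAuth 2.0 client_credentials flow against the Cognito token endpoint; the domain shown is a hypothetical example, and whether a scope parameter is needed depends on how the app client is configured. Substitute the values you collected above.

import requests

# Hypothetical values; substitute the domain, client ID, and client
# secret collected in the prerequisite steps.
TOKEN_ENDPOINT = "https://your-dlt-domain.auth.us-east-1.amazoncognito.com/oauth2/token"
CLIENT_ID = "your_m2m_client_id"
CLIENT_SECRET = "your_m2m_client_secret"

# Standard OAuth 2.0 client_credentials flow: the client ID and
# secret are sent as HTTP Basic credentials, and the grant type
# goes in a form-encoded body.
response = requests.post(
    TOKEN_ENDPOINT,
    auth=(CLIENT_ID, CLIENT_SECRET),
    data={"grant_type": "client_credentials"},
)
response.raise_for_status()
print(response.json()["access_token"])

A successful response is a JSON document containing an access_token field. Once service-based authentication is configured, Amazon Q Suite performs this token exchange for you.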
Configuration steps
1. In Amazon Q Suite, create a new agent or select an existing agent.
2. Add an agent prompt that describes how to interact with the DLT MCP server.
3. Add a new action and select MCP server action.
4. Configure the MCP server details:
   - MCP Server URL: Your DLT MCP endpoint
   - Authentication Type: Service-based authentication
   - Token Endpoint: Your Cognito token endpoint URL
   - Client ID: The client ID from the m2m app client
   - Client Secret: The client secret from the m2m app client
5. Save the MCP server action configuration.
6. Add the new MCP server action to your agent.
Launch and test the agent
1. Launch the agent in Amazon Q Suite.
2. Start a conversation with the agent using natural language prompts.
3. The agent will use the MCP tools to retrieve and analyze your load testing data.
Example prompts
The following examples demonstrate how to interact with your AI assistant to analyze load testing data through the MCP interface. Customize the test IDs, date ranges, and criteria to match your specific testing needs.
For detailed information about available MCP tools and their parameters, refer to MCP tools specification in the Developer Guide.
Simple test results query
Natural language interaction with the MCP server can be as simple as "Show me the load tests that have completed in the last 24 hours with their associated completion status" or more descriptive, such as the following:
Use list_scenarios to find my load tests. Then use get_latest_test_run to show me the basic execution data and performance metrics for the most recent test. If the results look concerning, also get the detailed performance metrics using get_test_run.
Interactive performance analysis with progressive disclosure
I need to analyze my load test performance, but I'm not sure which specific tests to focus on. Please help me by:
1. First, use list_scenarios to show me available test scenarios
2. Ask me which tests I want to analyze based on the list you show me
3. For my selected tests, use list_test_runs to get the test run history
4. Then use get_test_run with the test_run_id to get detailed response times, throughput, and error rates
5. If I want to compare tests, use get_baseline_test_run to compare against the baseline
6. If there are any issues, use get_test_run_artifacts to help me understand what went wrong
Please guide me through this step by step, asking for clarification whenever you need more specific information.
Production readiness validation
Help me validate if my API is ready for production deployment:
1. Use list_scenarios to find recent test scenarios
2. For the most recent test scenario, use get_latest_test_run to get basic execution data
3. Use get_test_run with that test_run_id to get detailed response times, error rates, and throughput
4. Use get_scenario_details with the test_id to show me what load patterns and endpoints were tested
5. If I have a baseline, use get_baseline_test_run to compare current results with the baseline
6. Provide a clear go/no-go recommendation based on the performance data
7. If there are any concerns, use get_test_run_artifacts to help identify potential issues
My SLA requirements are: response time under [X]ms, error rate under [Y]%.
Performance trend analysis
Analyze the performance trend for my load tests over the past [TIME_PERIOD]:
1. Use list_scenarios to get all test scenarios
2. For each scenario, use list_test_runs with start_date and end_date to get tests from that period
3. Use get_test_run for the key test runs to get detailed metrics
4. Use get_baseline_test_run to compare against the baseline
5. Identify any significant changes in response times, error rates, or throughput
6. If you detect performance degradation, use get_test_run_artifacts on the problematic tests to help identify causes
7. Present the trend analysis in a clear format showing whether performance is improving, stable, or degrading
Focus on completed tests and limit results to [N] tests if there are too many.
Troubleshooting failed tests
Help me troubleshoot my failed load tests:
1. Use list_scenarios to find test scenarios
2. For each scenario, use list_test_runs to find recent test runs
3. Use get_test_run with the test_run_id to get the basic execution data and failure information
4. Use get_test_run_artifacts to get detailed error messages and logs
5. Use get_scenario_details to understand what was being tested when it failed
6. If I have a similar test that passed, use get_baseline_test_run to identify differences
7. Summarize the causes of failure and suggest next steps for resolution
Show me the most recent [N] failed tests from the past [TIME_PERIOD].