How Distributed Load Testing on AWS works

The following detailed breakdown shows the steps involved in running a test scenario.

Test workflow

  1. You use the web console to submit a test scenario, including its configuration details, to the solution’s API.

  2. The test scenario configuration is uploaded to an Amazon Simple Storage Service (Amazon S3) bucket as a JSON file (s3://<bucket-name>/test-scenarios/<$TEST_ID>/<$TEST_ID>.json).
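
A minimal sketch of this upload step, assuming a boto3-based implementation; the bucket name and scenario fields are illustrative and not the solution’s exact schema.

import json
import boto3

s3 = boto3.client("s3")

def store_scenario(bucket, test_id, scenario):
    # Write the scenario config to s3://<bucket>/test-scenarios/<TEST_ID>/<TEST_ID>.json
    key = f"test-scenarios/{test_id}/{test_id}.json"
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(scenario).encode("utf-8"),
        ContentType="application/json",
    )
    return key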

  3. An AWS Step Functions state machine runs with the test ID, task count, test type, and file type as its input. If the test is scheduled, the solution first creates a CloudWatch Events rule that triggers the state machine on the specified date. For more details on the scheduling workflow, refer to the Test scheduling workflow section of this guide.
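
A sketch of how the state machine could be started with that input, assuming boto3; the state machine ARN and input field names are assumptions rather than the solution’s exact contract.

import json
import boto3

sfn = boto3.client("stepfunctions")

def start_test_workflow(state_machine_arn, test_id, task_count, test_type, file_type):
    # One execution per test, carrying the metadata that later steps need.
    response = sfn.start_execution(
        stateMachineArn=state_machine_arn,
        name=test_id,
        input=json.dumps({
            "testId": test_id,
            "taskCount": task_count,
            "testType": test_type,
            "fileType": file_type,
        }),
    )
    return response["executionArn"]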

  4. Configuration details are stored in the scenarios Amazon DynamoDB table.
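
A sketch of storing the configuration details, assuming a boto3 DynamoDB resource; the table name, key, and attribute names are assumptions.

import boto3

scenarios_table = boto3.resource("dynamodb").Table("scenarios")

def save_scenario(test_id, scenario, status="running"):
    # Persist the full configuration so later steps and the console can read it back.
    scenarios_table.put_item(Item={
        "testId": test_id,        # assumed partition key
        "status": status,
        "testScenario": scenario, # configuration details from the API request
    })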

  5. In the AWS Step Functions task runner workflow, the task-status-checker AWS Lambda function checks whether Amazon Elastic Container Service (Amazon ECS) tasks are already running for the same test ID. If tasks with the same test ID are found running, the function returns an error. If no Amazon ECS tasks are running in the AWS Fargate cluster, the function returns the test ID, task count, and test type.
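
One way such a check could look, sketched with boto3; tagging worker tasks with the test ID through the RunTask startedBy field is an assumption made for illustration.

import boto3

ecs = boto3.client("ecs")

def tasks_running_for_test(cluster, test_id):
    # Collect ARNs of RUNNING tasks that were started for this test ID.
    paginator = ecs.get_paginator("list_tasks")
    arns = []
    for page in paginator.paginate(cluster=cluster,
                                   startedBy=test_id,
                                   desiredStatus="RUNNING"):
        arns.extend(page["taskArns"])
    return arns

# The workflow raises an error if this list is non-empty; otherwise it passes
# the test ID, task count, and test type to the next state.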

  6. The task-runner AWS Lambda function gets the task details from the previous step and runs the Amazon ECS worker tasks in the AWS Fargate cluster using the Amazon ECS RunTask action. These worker tasks are launched and then wait for a start message from the leader task before beginning the test. The RunTask action starts at most 10 tasks per call, so if your task count is more than 10, the action is called repeatedly until all worker tasks have been started. The function also generates a prefix to distinguish the current test in the results-parser AWS Lambda function.
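
The batching described above could be implemented as in this sketch; the cluster, task definition, and network settings are placeholders, and the startedBy tag is the same illustrative assumption used in the previous sketch.

import boto3

ecs = boto3.client("ecs")

def launch_workers(cluster, task_definition, test_id, task_count, subnets, security_groups):
    task_arns = []
    remaining = task_count
    while remaining > 0:
        batch = min(remaining, 10)  # RunTask starts at most 10 tasks per call
        response = ecs.run_task(
            cluster=cluster,
            taskDefinition=task_definition,
            launchType="FARGATE",
            count=batch,
            startedBy=test_id,  # lets the status checker find these tasks
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": subnets,
                    "securityGroups": security_groups,
                    "assignPublicIp": "ENABLED",
                }
            },
        )
        task_arns.extend(task["taskArn"] for task in response["tasks"])
        remaining -= batch
    return task_arns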

  7. The task-status-checker AWS Lambda function checks whether all the Amazon ECS worker tasks are running with the same test ID. If tasks are still provisioning, it waits for one minute and checks again. Once all Amazon ECS tasks are running, it returns the test ID, task count, test type, all task IDs, and the prefix, and passes them to the task-runner function.

  8. The task-runner AWS Lambda function runs again, this time launching a single Amazon ECS task to act as the leader node. This ECS task sends a start test message to each of the worker tasks in order to start the tests simultaneously.

  9. The task-status-checker AWS Lambda function again checks if Amazon ECS tasks are running with the same test ID. If tasks are still running, it waits for one minute and checks again. Once there are no running Amazon ECS tasks, it returns the test ID, task count, test type, and prefix.

  10. When the task-runner AWS Lambda function runs the Amazon ECS tasks in the AWS Fargate cluster, each task downloads the test configuration from Amazon S3 and starts the test.

  11. Once the tests are running, the average response time, number of concurrent users, number of successful requests, and number of failed requests for each task are logged in Amazon CloudWatch and can be viewed in a CloudWatch dashboard.
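
For illustration, the sketch below publishes the four metrics named above as custom CloudWatch metrics; the namespace and dimension names are assumptions, and the solution’s containers may instead surface these values through CloudWatch Logs.

import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_task_metrics(test_id, avg_response_time, concurrent_users, successes, failures):
    dimensions = [{"Name": "TestId", "Value": test_id}]
    cloudwatch.put_metric_data(
        Namespace="DistributedLoadTesting",  # assumed namespace
        MetricData=[
            {"MetricName": "AvgResponseTime", "Dimensions": dimensions,
             "Value": avg_response_time, "Unit": "Seconds"},
            {"MetricName": "ConcurrentUsers", "Dimensions": dimensions,
             "Value": concurrent_users, "Unit": "Count"},
            {"MetricName": "SuccessfulRequests", "Dimensions": dimensions,
             "Value": successes, "Unit": "Count"},
            {"MetricName": "FailedRequests", "Dimensions": dimensions,
             "Value": failures, "Unit": "Count"},
        ],
    )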

  12. If you included live data in the test, the solution filters real-time test results in CloudWatch using a subscription filter and passes the matching data to a Lambda function.
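
The Lambda function receiving the filtered data gets the standard CloudWatch Logs subscription payload, which arrives base64-encoded and gzip-compressed; what is done with each log event in this sketch is an assumption.

import base64
import gzip
import json

def handler(event, context):
    # Decode the subscription filter payload delivered under event["awslogs"]["data"].
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
    for log_event in payload["logEvents"]:
        # Each message carries a slice of real-time test results to structure
        # and forward (see the next step).
        process_result_line(log_event["message"])

def process_result_line(message):
    print(message)  # placeholder for the structuring done before publishing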

  13. The Lambda function then structures the data received and publishes it to an AWS IoT Core topic.
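
A minimal sketch of the publish step, assuming the boto3 IoT data plane client; the per-test topic name is an assumption.

import json
import boto3

iot_data = boto3.client("iot-data")

def publish_live_results(test_id, results):
    # The web console subscribes to this topic to graph live results.
    iot_data.publish(
        topic=f"dlt/{test_id}",  # assumed per-test topic name
        qos=0,
        payload=json.dumps(results),
    )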

  14. The web console subscribes to the AWS IoT Core topic for the test and receives the data published to the topic to graph the real-time data while the test is running.

  15. When the test is complete, the containers export a detailed report as an XML file to Amazon S3. Each file is given a UUID as its filename, for example s3://<bucket-name>/test-scenarios/<$TEST_ID>/results/<$UUID>.xml.

  16. When the XML files are uploaded to Amazon S3, the results-parser AWS Lambda function reads the XML files that start with the test’s prefix, then parses and aggregates all of the results into one summarized result.

  17. The results-parser AWS Lambda function writes the aggregate result to an Amazon DynamoDB table.
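
Steps 16 and 17 could be condensed into a sketch like the following; the XML element names, the way the prefix maps onto object keys, and the summary written back to DynamoDB are all assumptions, since the report format is defined by the load testing engine.

import boto3
import xml.etree.ElementTree as ET

s3 = boto3.client("s3")
scenarios_table = boto3.resource("dynamodb").Table("scenarios")

def aggregate_results(bucket, test_id, prefix):
    total_requests = 0
    total_failures = 0
    paginator = s3.get_paginator("list_objects_v2")
    results_prefix = f"test-scenarios/{test_id}/results/{prefix}"  # assumed key layout
    for page in paginator.paginate(Bucket=bucket, Prefix=results_prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            root = ET.fromstring(body)
            # Element names below are purely illustrative.
            total_requests += int(root.findtext("requests", default="0"))
            total_failures += int(root.findtext("failures", default="0"))

    # Step 17: store the aggregate alongside the scenario record (assumed schema).
    scenarios_table.update_item(
        Key={"testId": test_id},
        UpdateExpression="SET results = :r",
        ExpressionAttributeValues={
            ":r": {"totalRequests": total_requests, "totalFailures": total_failures}
        },
    )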

MCP Server workflow (Optional)

If you deploy the optional MCP Server integration, AI agents can access and analyze your load testing data through the following workflow:

MCP Server architecture

MCP Server architecture showing integration with DLT components

  1. Customer interaction - The customer interacts with DLT’s MCP server through the MCP endpoint hosted by AWS AgentCore Gateway. AI agents connect to this endpoint to request access to load testing data.

  2. Authorization - AgentCore Gateway handles authorization against the solution’s Cognito user pool application client. The gateway validates the user’s Cognito token to ensure they have permission to access the DLT MCP server. Authorized users are granted access, with agent tool access limited to read-only operations.

  3. Tool specification - AgentCore Gateway connects to the DLT MCP Server Lambda function. A tool specification defines the available tools that AI agents can use to interact with your load testing data.

  4. Read-only API access - The Lambda function is scoped to read-only API access through the existing DLT API Gateway endpoints. The function provides four primary operations, illustrated in the sketch after this list:

    • List scenarios - Retrieve a list of test scenarios from the DynamoDB scenarios table

    • Get scenario test results - Access detailed test results for specific scenarios from DynamoDB and S3

    • Get Fargate load test runners - Query information about running Fargate tasks in the ECS cluster

    • Get available Regional stacks - Retrieve information about deployed regional infrastructure from CloudFormation
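
The following sketch shows how the MCP Server Lambda function might dispatch these four read-only tools to the existing DLT API. The tool names, API paths, and base URL are assumptions; only the read-only scope comes from the description above, and the Cognito authorization header is omitted for brevity.

import json
import urllib.request

DLT_API_BASE = "https://example.execute-api.us-east-1.amazonaws.com/prod"  # placeholder

# Assumed mapping of tool names to read-only DLT API paths.
TOOL_ROUTES = {
    "list_scenarios": "/scenarios",
    "get_scenario_results": "/scenarios/{test_id}",
    "get_fargate_runners": "/tasks",
    "get_regional_stacks": "/regions",
}

def handler(event, context):
    tool = event.get("toolName")
    params = event.get("parameters", {})
    path = TOOL_ROUTES[tool].format(**params)
    # GET-only requests keep the integration scoped to read operations.
    with urllib.request.urlopen(DLT_API_BASE + path) as response:
        return {"statusCode": 200, "body": json.loads(response.read())}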

The MCP Server integration leverages the existing DLT infrastructure (API Gateway, Cognito, DynamoDB, S3) to provide secure, read-only access to test data for AI-powered analysis and insights.