Update the container image

Distributed Load Testing on AWS uses AWS CodePipeline to help build and register the Taurus container image in Amazon Elastic Container Registry (Amazon ECR). During initial deployment, a container ZIP file is uploaded to Amazon Simple Storage Service (Amazon S3). AWS CodeBuild unpacks the ZIP file, builds the container image, and sends it to Amazon ECR for registration.


Figure 2: Container image pipeline

To update the Taurus container image, upload a new or updated container ZIP file to Amazon S3. AWS CodePipeline will automatically start the build and registration process with the new file.
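For example, the new ZIP file can be copied to the bucket with a few lines of boto3. This is a minimal sketch, assuming boto3 is installed and AWS credentials are configured; the bucket name and object key below are placeholders, so substitute the values from your deployment.

# Minimal sketch, assuming boto3 is installed and AWS credentials are configured.
# BUCKET and KEY are hypothetical placeholders; use your deployment's source
# bucket and the key of the original container ZIP file.
import boto3

BUCKET = "my-dlt-source-bucket"                        # assumption: your bucket
KEY = "distributed-load-testing-on-aws/container.zip"  # assumption: original key

s3 = boto3.client("s3")

# Overwriting the object at the original key is what triggers AWS CodePipeline
# to rebuild the image and register it with Amazon ECR.
s3.upload_file("container.zip", BUCKET, KEY)
print(f"Uploaded container.zip to s3://{BUCKET}/{KEY}")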

The following example shows the Dockerfile.

FROM blazemeter/taurus
# taurus includes python and pip
RUN /usr/bin/python3 -m pip install --upgrade pip
RUN pip install --no-cache-dir awscli
# Taurus working directory = /bzt-configs
ADD ./load-test.sh /bzt-configs/
RUN chmod 755 /bzt-configs/load-test.sh
ENTRYPOINT ["sh", "-c","./load-test.sh"]

In addition to the Dockerfile, the container ZIP file contains the following Bash script, which downloads the test configuration from Amazon S3 before running Taurus.

#!/bin/bash

# set a uuid for the results xml file name in S3
UUID=$(cat /proc/sys/kernel/random/uuid)

echo "S3_BUCKET:: ${S3_BUCKET}"
echo "TEST_ID:: ${TEST_ID}"
echo "TEST_TYPE:: ${TEST_TYPE}"
echo "FILE_TYPE:: ${FILE_TYPE}"
echo "PREFIX:: ${PREFIX}"
echo "UUID ${UUID}"

echo "Download test scenario"
aws s3 cp s3://$S3_BUCKET/test-scenarios/$TEST_ID.json test.json

# download JMeter jmx file
if [ "$TEST_TYPE" != "simple" ]; then
  # Copy *.jar to JMeter library path. See the Taurus JMeter path: https://gettaurus.org/docs/JMeter/
  JMETER_LIB_PATH=`find ~/.bzt/jmeter-taurus -type d -name "lib"`
  echo "cp $PWD/*.jar $JMETER_LIB_PATH"
  cp $PWD/*.jar $JMETER_LIB_PATH

  if [ "$FILE_TYPE" != "zip" ]; then
    aws s3 cp s3://$S3_BUCKET/public/test-scenarios/$TEST_TYPE/$TEST_ID.jmx ./
  else
    aws s3 cp s3://$S3_BUCKET/public/test-scenarios/$TEST_TYPE/$TEST_ID.zip ./
    unzip $TEST_ID.zip
    # only looks for the first jmx file.
    JMETER_SCRIPT=`find . -name "*.jmx" | head -n 1`
    if [ -z "$JMETER_SCRIPT" ]; then
      echo "There is no JMeter script in the zip file."
      exit 1
    fi
    sed -i -e "s|$TEST_ID.jmx|$JMETER_SCRIPT|g" test.json
  fi
fi

# Run the Python script set in $SCRIPT (ecscontroller.py or ecslistener.py)
if [ -z "$IPNETWORK" ]; then
  python3 $SCRIPT
else
  python3 $SCRIPT $IPNETWORK $IPHOSTS
fi

echo "Running test"
stdbuf -i0 -o0 -e0 bzt test.json -o modules.console.disable=true | stdbuf -i0 -o0 -e0 tee -a result.tmp | sed -u -e "s|^|$TEST_ID |"
CALCULATED_DURATION=`cat result.tmp | grep -m1 "Test duration" | awk -F ' ' '{ print $5 }' | awk -F ':' '{ print ($1 * 3600) + ($2 * 60) + $3 }'`

# upload custom results to S3 if any
# every file goes under $TEST_ID/$PREFIX/$UUID to distinguish the result correctly
if [ "$TEST_TYPE" != "simple" ]; then
  if [ "$FILE_TYPE" != "zip" ]; then
    cat $TEST_ID.jmx | grep filename > results.txt
  else
    cat $JMETER_SCRIPT | grep filename > results.txt
  fi
  sed -i -e 's/<stringProp name="filename">//g' results.txt
  sed -i -e 's/<\/stringProp>//g' results.txt
  sed -i -e 's/ //g' results.txt
  echo "Files to upload as results"
  cat results.txt
  files=(`cat results.txt`)
  for f in "${files[@]}"; do
    p="s3://$S3_BUCKET/results/$TEST_ID/JMeter_Result/$PREFIX/$UUID/$f"
    if [[ $f = /* ]]; then
      p="s3://$S3_BUCKET/results/$TEST_ID/JMeter_Result/$PREFIX/$UUID$f"
    fi
    echo "Uploading $p"
    aws s3 cp $f $p
  done
fi

if [ -f /tmp/artifacts/results.xml ]; then
  echo "Validating Test Duration"
  TEST_DURATION=`xmlstarlet sel -t -v "/FinalStatus/TestDuration" /tmp/artifacts/results.xml`
  if (( $(echo "$TEST_DURATION > $CALCULATED_DURATION" | bc -l) )); then
    echo "Updating test duration: $CALCULATED_DURATION s"
    xmlstarlet ed -L -u /FinalStatus/TestDuration -v $CALCULATED_DURATION /tmp/artifacts/results.xml
  fi
  echo "Uploading results"
  aws s3 cp /tmp/artifacts/results.xml s3://$S3_BUCKET/results/${TEST_ID}/${PREFIX}-${UUID}.xml
else
  echo "An error might have occurred while the test was running."
fi
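For local debugging, the script's inputs can be exercised by running the image with the environment variables it reads (S3_BUCKET, TEST_ID, TEST_TYPE, FILE_TYPE, and PREFIX). The following is a minimal sketch using the Docker SDK for Python; the image tag and all values are hypothetical placeholders, not values from the solution.

# Minimal sketch, assuming the Docker SDK for Python (pip install docker) and a
# locally built image. The image tag and all values below are hypothetical.
import docker

client = docker.from_env()

# These are the environment variables the load-test.sh script reads.
# Note: a real run would also need AWS credentials available inside the container.
env = {
    "S3_BUCKET": "my-dlt-scenario-bucket",  # assumption: your scenario bucket
    "TEST_ID": "abc123",                    # assumption: an existing test ID
    "TEST_TYPE": "simple",
    "FILE_TYPE": "none",
    "PREFIX": "2024-01-01-00-00",
}

# Run to completion and capture the container's log output.
logs = client.containers.run("load-testing:latest", environment=env, remove=True)
print(logs.decode())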

In addition to the Dockerfile and the Bash script, the container ZIP file includes two Python scripts, one of which each task runs from within the Bash script. The worker tasks run the ecslistener.py script, while the leader task runs the ecscontroller.py script. The ecslistener.py script creates a socket on port 50000 and waits for a message. The ecscontroller.py script connects to the socket and sends the start test message to the worker tasks, which allows them to start simultaneously.
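This handshake can be pictured with a minimal socket sketch. This is not the solution's actual code: only port 50000 comes from the description above, and the "start" payload, function names, and worker address list are assumptions made for illustration.

# Minimal sketch of the start-signal handshake, assuming a plain TCP socket on
# port 50000. The "start" payload and function names are hypothetical; the real
# ecslistener.py/ecscontroller.py scripts may use a different message format.
import socket

PORT = 50000  # port the worker tasks listen on, per the description above

def listen_for_start() -> None:
    """Worker side (ecslistener.py role): block until the start message arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("0.0.0.0", PORT))
        server.listen(1)
        conn, _addr = server.accept()
        with conn:
            message = conn.recv(1024).decode()
            print(f"Received: {message}")  # e.g. "start" -> begin the load test

def send_start(worker_ips: list[str]) -> None:
    """Leader side (ecscontroller.py role): signal every worker."""
    for ip in worker_ips:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
            client.connect((ip, PORT))
            client.sendall(b"start")

Because every worker blocks until the leader connects, all load generators begin within moments of one another, which is what keeps the aggregated results comparable across tasks.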