
AWS Mainframe Modernization Application Testing concepts

AWS Mainframe Modernization Application Testing uses terms that other testing services or software packages might use with a slightly different meaning. The following sections explain how AWS Mainframe Modernization Application Testing uses this terminology.

Test case

A test case is the most atomic unit of action in your testing workflow. Usually, a test case represents an independent unit of business logic that modifies data. Comparisons are done for each test case. Test cases are added to a test suite. Test cases contain metadata about the data artifacts (data sets, databases) that the test case modifies and about the business functions that are triggered during test case execution, such as batch jobs and 3270 interactive dialogs. For example, the metadata includes the names and code pages of data sets (see the sketch after the following list of test case types).

Input data → Test case → Output data

Test cases can be either online or batch type:

  • Online 3270 screen test cases are test cases where a user executes interactive 3270 screen dialogs to read, modify, or produce new business data (database or data set records).

  • Batch test cases are test cases that submit a batch job to read, process, and modify or produce new business data (data set or database records).
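
The following minimal Python sketch illustrates the kind of metadata a test case carries for the two types above. The field names (kind, code_page, batch_job_name, and so on) and the sample data set and job names are illustrative assumptions, not the Application Testing API.

```python
from dataclasses import dataclass, field
from typing import List, Literal, Optional

@dataclass
class DataSetRef:
    """A data set that the test case reads or modifies (illustrative structure)."""
    name: str                 # for example, "SAMPLE.ACCT.MASTER"
    code_page: str = "037"    # EBCDIC code page used to interpret the records
    record_format: str = "FB"

@dataclass
class TestCase:
    """Illustrative shape of test case metadata (not the service API)."""
    name: str
    kind: Literal["online3270", "batch"]          # the two test case types above
    input_data: List[DataSetRef] = field(default_factory=list)
    output_data: List[DataSetRef] = field(default_factory=list)
    batch_job_name: Optional[str] = None          # only meaningful for batch test cases

# One batch test case and one online 3270 test case.
update_accounts = TestCase(
    name="update-accounts",
    kind="batch",
    batch_job_name="ACCTUPDT",
    output_data=[DataSetRef(name="SAMPLE.ACCT.MASTER.OUT")],
)
new_customer_dialog = TestCase(name="new-customer-dialog", kind="online3270")
```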

Test suite

A test suite is a collection of test cases that run in sequential order, one by one. Replay is done at the test suite level. All test cases in the test suite run on the target testing environment when the test suite is replayed. If there are differences after comparing the reference and replay testing artifacts, the differences are shown at the test case level.

For example, Test Suite A:

Test Case 1, Test Case 2, Test Case 3, and so forth.

Test environment configuration

A test environment configuration lets you set up, with CloudFormation, the initial set of data and configuration parameters (or resources) that you need to make the test run repeatable.
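
During replay, Application Testing launches this CloudFormation template for you (see Replay below). As a minimal sketch, assuming you keep the template in a local file named test-env.yaml, you could validate it and try launching it yourself with boto3 before registering it in a test environment configuration; the file name and stack name are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation")

# Read the CloudFormation template that declares the initial data and resources
# (placeholder file name).
with open("test-env.yaml") as f:
    template_body = f.read()

# Check the template syntax before handing it to Application Testing.
cfn.validate_template(TemplateBody=template_body)

# Optionally try a standalone launch to verify the initial conditions are repeatable.
cfn.create_stack(
    StackName="apptest-initial-conditions",   # placeholder stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],    # only needed if the template creates IAM resources
)
cfn.get_waiter("stack_create_complete").wait(StackName="apptest-initial-conditions")
```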

Upload

Uploads are done at the test suite level. During upload, you must provide an Amazon S3 location that contains the artifacts, data sets, and relational database CDC journals from the source mainframe to be compared against. These are considered the reference data from the source mainframe. During replay, the generated replay data is compared against the uploaded reference data to ensure application equivalency.
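
A minimal sketch of the upload side, assuming the reference artifacts exported from the mainframe are already on local disk; the bucket name, prefix, and local directory are placeholders.

```python
import boto3
from pathlib import Path

s3 = boto3.client("s3")

bucket = "my-apptest-reference-data"   # placeholder bucket
prefix = "test-suite-a/reference/"     # placeholder prefix you point the test suite at

# Upload data sets and CDC journals exported from the source mainframe.
for local_file in Path("reference-artifacts").iterdir():
    if local_file.is_file():
        s3.upload_file(str(local_file), bucket, prefix + local_file.name)
        print(f"uploaded s3://{bucket}/{prefix}{local_file.name}")
```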

Replay

Replays are done at the test suite level. During replay, AWS Mainframe Modernization Application Testing uses the CloudFormation template to create the target test environment and run the application. Data sets and database records that are modified during replay are captured and compared against the reference data from the mainframe. Typically, you upload from the mainframe once and then replay multiple times, until functional equivalence has been reached.
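
A minimal sketch of starting a replay through the Application Testing API (the apptest client in boto3). The IDs are placeholders, and the parameter and response field names shown here are assumptions that may differ in your SDK version; check the API reference.

```python
import boto3

apptest = boto3.client("apptest")

# Start a replay of an existing test suite against an existing test environment
# configuration. IDs are placeholders; field names are assumptions.
response = apptest.start_test_run(
    testSuiteId="EXAMPLE-TEST-SUITE-ID",
    testConfigurationId="EXAMPLE-TEST-CONFIG-ID",
)
print("Started test run:", response.get("testRunId"))
```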

Compare

Comparisons are made automatically after a replay finishes successfully. During comparison, the reference data you captured and uploaded during the upload phase is compared against the replay data generated during the replay phase. Comparisons happen at the individual test case level, separately for data sets, database records, and online screens.

Database comparisons

Application Testing uses state-progress matching when comparing changes in database records between the source and target applications. State-progress matching compares each individual INSERT, UPDATE, and DELETE statement as it runs, rather than comparing table rows at the end of the process. State-progress matching is more efficient than the alternatives, providing faster and more accurate comparisons because it compares only changed data and detects self-correcting errors in the transaction flow. By using CDC (change data capture) technology, Application Testing can detect individual relational database changes and compare them between the source and target.

Relational database changes are generated on the source and target by the tested application code using DML (Data Manipulation Language) statements such as SQL INSERT, UPDATE, or DELETE, but also indirectly when the application uses stored procedures, when database triggers are set on some tables, or when CASCADE DELETE constraints are used to guarantee referential integrity and automatically trigger additional deletions.
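
The following minimal sketch illustrates the state-progress idea in plain Python: compare the ordered sequence of captured changes rather than only the final table contents, so a divergence is reported at the exact change where it happens, even if a later change makes the final state match again (a self-correcting error). The change records are made up for illustration and do not reflect the actual CDC journal format.

```python
from itertools import zip_longest

# Each captured change is (operation, table, row image); structures are illustrative only.
source_changes = [
    ("INSERT", "ACCOUNT", {"ID": 1, "BALANCE": 100}),
    ("UPDATE", "ACCOUNT", {"ID": 1, "BALANCE": 150}),
    ("UPDATE", "ACCOUNT", {"ID": 1, "BALANCE": 200}),
]
target_changes = [
    ("INSERT", "ACCOUNT", {"ID": 1, "BALANCE": 100}),
    ("UPDATE", "ACCOUNT", {"ID": 1, "BALANCE": 175}),   # diverges here ...
    ("UPDATE", "ACCOUNT", {"ID": 1, "BALANCE": 200}),   # ... but the final state matches
]

# Final-state comparison alone would miss the divergence; state-progress matching reports it.
for i, (src, tgt) in enumerate(zip_longest(source_changes, target_changes), start=1):
    if src is None or tgt is None:
        print(f"change {i}: MISSING on", "target" if tgt is None else "source")
    elif src == tgt:
        print(f"change {i}: IDENTICAL")
    else:
        print(f"change {i}: DIFFERENT  source={src}  target={tgt}")
```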

Dataset comparisons

Application Testing automatically compares the reference and replay data sets produced on the source (recording) and target (replay) systems.

To compare data sets:

  1. Start with the same input data (data sets, database) on both the source and the target.

  2. Run your test cases on the source system (mainframe).

  3. Capture the produced data sets and upload them to an Amazon S3 bucket. You can transfer data from the source to AWS as CDC journals, screens, and data sets.

  4. When you upload the test case, specify the location of the Amazon S3 bucket where the mainframe data sets were uploaded.

After replay is complete, Application Testing automatically compares the reference and target output data sets, showing whether records are identical, equivalent, different, or missing. For example, date fields that are relative to the moment of workload execution (day + 1, end of current month, and so on) are automatically considered equivalent. In addition, you can optionally define equivalence rules so that records that are not identical but have the same business meaning are flagged as equivalent.
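
Application Testing performs these comparisons for you; the following minimal sketch only illustrates the classification logic in Python. The record keys, record layouts, and the is_equivalent hook are hypothetical.

```python
def compare_data_set(reference, replay, is_equivalent=lambda ref, rep: False):
    """Classify each record by key: IDENTICAL, EQUIVALENT, DIFFERENT, or MISSING."""
    results = {}
    for key in sorted(reference.keys() | replay.keys()):
        ref, rep = reference.get(key), replay.get(key)
        if ref is None or rep is None:
            results[key] = "MISSING"
        elif ref == rep:
            results[key] = "IDENTICAL"
        elif is_equivalent(ref, rep):
            results[key] = "EQUIVALENT"   # see equivalence rules below
        else:
            results[key] = "DIFFERENT"
    return results

reference = {"0001": "SMITH  100.00", "0002": "JONES   55.10"}
replay    = {"0001": "SMITH  100.00", "0002": "JONES   55.99"}
print(compare_data_set(reference, replay))
# {'0001': 'IDENTICAL', '0002': 'DIFFERENT'}
```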

Comparison status

Application Testing uses the following comparison statuses: IDENTICAL, EQUIVALENT, and DIFFERENT.

IDENTICAL

The source and target data are exactly the same.

EQUIVALENT

The source and target data contain false differences that are considered equivalences, such as dates or timestamps that are relative to the moment of workload execution and do not affect functional equivalence. You can define equivalence rules to identify what these differences are. When all replayed test suites compared to their reference test suites show a status of IDENTICAL or EQUIVALENT, your test suite shows no differences.

DIFFERENT

The source and target data contain differences, such as a different number of records in a data set, or different values in the same record.

Equivalence rules

A set of rules to identify false differences that can be considered equivalent results. Offline functional equivalence testing (OFET) inevitably causes differences for some results between the source and target systems. For example, update timestamps are different by design. Equivalence rules explain how to adjust for those differences and avoid false positives at comparison time. For example, if a date in a particular data column is runtime + 2 days, the equivalence rule describes this and accepts a value on the target system equal to the target runtime + 2 days, instead of a value that strictly equals the same column in the uploaded reference.
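
A minimal sketch of the runtime + 2 days example above, written as a Python check. The column name DUE_DATE, the record shapes, and the run dates are hypothetical; in Application Testing you define such rules as part of the comparison settings rather than writing code like this.

```python
from datetime import date, timedelta

def due_date_rule(ref_row, rep_row, ref_run_date, rep_run_date, offset=timedelta(days=2)):
    """Accept a DUE_DATE that equals each system's own run date + 2 days,
    as long as every other column is strictly identical."""
    other_cols_match = (
        {k: v for k, v in ref_row.items() if k != "DUE_DATE"}
        == {k: v for k, v in rep_row.items() if k != "DUE_DATE"}
    )
    return (other_cols_match
            and ref_row["DUE_DATE"] == ref_run_date + offset
            and rep_row["DUE_DATE"] == rep_run_date + offset)

ref_row = {"ID": 42, "DUE_DATE": date(2024, 5, 3)}   # recorded on the mainframe on 2024-05-01
rep_row = {"ID": 42, "DUE_DATE": date(2024, 6, 12)}  # replayed on the target on 2024-06-10
print(due_date_rule(ref_row, rep_row, date(2024, 5, 1), date(2024, 6, 10)))  # True -> EQUIVALENT
```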

Final-state data set comparison

The end state of data sets that have been created or modified, including all changes or updates made to the data sets from their initial state. For data sets, Application Testing looks at the records in those data sets at the end of a test case run and compares the results.

State-progress database comparisons

Comparisons of changes made to database records as a sequence of individual DML (INSERT, UPDATE, DELETE) statements. Application Testing compares individual changes (inserting, updating, or deleting a table row) from the source database to the target database, and identifies differences for each individual change. For example, an individual INSERT statement might insert a row into a table with different values on the source database compared to the target database.

Functional equivalence (FE)

Two systems are considered functionally equivalent if they produce the same results on all observable operations, given the same input data. For example, two applications are considered functionally equivalent if the same input data produces identical output data (through screens, data set changes, or database changes).
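
In terms of the comparison statuses above, a minimal sketch of this definition: the target is functionally equivalent to the source when every compared output of every test case comes back IDENTICAL or EQUIVALENT.

```python
def functionally_equivalent(comparison_statuses):
    """True when no compared artifact is DIFFERENT or missing."""
    return all(status in ("IDENTICAL", "EQUIVALENT") for status in comparison_statuses)

print(functionally_equivalent(["IDENTICAL", "EQUIVALENT", "IDENTICAL"]))  # True
print(functionally_equivalent(["IDENTICAL", "DIFFERENT"]))                # False
```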

Online 3270 screen comparisons

When the target system runs under the AWS Blu Age runtime in the AWS Cloud, Application Testing compares the output of the mainframe 3270 screens with the output of the modernized application web screens. When the target system runs under the Micro Focus runtime in the AWS Cloud, it compares the output of the mainframe 3270 screens with the 3270 screens of the rehosted application.

Replay data

Replay data describes the data generated by replaying a test suite on the target test environment. For example, replay data is generated when a test suite runs on an AWS Mainframe Modernization service application. Replay data is then compared to the reference data captured from the source. Every time you replay the workload in the target environment, a new generation of replay data is created.

Reference data

Reference data describes the data captured on the source mainframe. It is the reference to which the replay (target) generated data is compared. Usually, for every recording on the mainframe that creates reference data, there are many replays. This is because users typically capture the correct state of the application on the mainframe once, and then replay the test cases on the target modernized application to validate equivalency. If bugs are found, they are fixed and the test cases are replayed again. Often, multiple cycles of replaying, fixing bugs, and replaying again are needed to validate the fixes. This is called the capture once, replay multiple times paradigm of testing.

Upload, Replay, and Compare

Application Testing operates in three steps:

  • Upload: Captures the reference data created on the mainframe for each test case of a test scenario. This data can include 3270 online screens, data sets, and database records.

    • For online 3270 screens, you must use the Blu Insights terminal emulator to capture your source workload. For more information, see the Blu Insights documentation.

    • For data sets, you need to capture the data sets produced by each test case on the mainframe by using common tools, such as FTP or the data set transfer service that is part of AWS Mainframe Modernization.

    • For database changes, you follow the AWS Mainframe Modernization Data Replication with Precisely documentation to capture and generate CDC journals containing the changes.

  • Replay: The test suite is replayed in the target environment. All test cases specified in the test suite run. The specified data types created by the individual test cases, such as data sets, relational database changes, or 3270 screens, are captured automatically. This data is known as replay data and is compared against the reference data captured during the upload phase.

    Note

    The relational database changes will require DMS-specific configuration options in your initial condition CloudFormation template.

  • Compare: The source testing reference data and the target replay data are compared, and the results are displayed to you as identical, different, equivalent, or missing data.

Differences

Indicates that differences have been detected between the reference and replay data sets by data comparison. For example, a field in an online 3270 screen that shows different values, from a business logic standpoint, between the source mainframe and the target modernized application is considered a difference. Another example is a record in a data set that is not identical between the source and target applications.

Equivalencies

Equivalent records are records that are different between the reference and replay data sets, but should not be treated as different from a business logic standpoint. For example, a record containing the timestamp of when the data set was produced (workload execution time). Using customizable equivalence rules, you can instruct Application Testing to treat such false-positive differences as equivalences, even when the values differ between reference and replay data.

Source application

The source mainframe application to be compared against.

Target application

The new or modified application on which testing is done and which is compared to the source application to detect defects and to achieve functional equivalence between the source and target applications. The target application typically runs in the AWS Cloud.