Modernize and deploy mainframe applications using AWS Transform and Terraform
Mason Cahill, Polaris Jhandi, Prachi Khanna, Sivasubramanian Ramani, and Santosh Kumar Singh, Amazon Web Services
Summary
AWS Transform can accelerate large-scale modernization of .NET, mainframe, and VMware workloads. It deploys specialized AI agents that automate complex tasks such as assessments, code analysis, refactoring, decomposition, dependency mapping, validation, and transformation planning. This pattern demonstrates how to use AWS Transform to modernize a mainframe application and then deploy it to AWS infrastructure by using HashiCorp Terraform. These step-by-step instructions help you transform CardDemo, a sample open source mainframe application, from COBOL to a modern Java application.
Prerequisites and limitations
Prerequisites
An active AWS account
Administrative permissions to create AWS resources and deploy applications
Terraform version 1.5.7 or higher, configured
AWS Provider for Terraform, configured
AWS IAM Identity Center, enabled
AWS Transform, enabled
A user, onboarded to an AWS Transform workspace with a contributor role that can run transformation jobs
Limitations
AWS Transform is available only in some AWS Regions. For a complete list of supported Regions, see Supported Regions for AWS Transform.
There is a service quota for mainframe transformation capabilities in AWS Transform. For more information, see Quotas for AWS Transform.
To collaborate on a shared workspace, all users must be registered users of the same instance of AWS IAM Identity Center that is associated with your instance of the AWS Transform web application.
The Amazon Simple Storage Service (Amazon S3) bucket and AWS Transform must be in the same AWS account and Region.
Architecture
The following diagram shows the end-to-end modernization of the legacy application and deployment to the AWS Cloud. Application and database credentials are stored in AWS Secrets Manager, and Amazon CloudWatch provides monitoring and logging capabilities.
The diagram shows the following workflow:
Through AWS IAM Identity Center, the user authenticates and accesses AWS Transform in the AWS account.
The user uploads the COBOL mainframe code to the Amazon S3 bucket and initiates the transformation in AWS Transform.
AWS Transform modernizes the COBOL code into cloud-native Java code and stores the modernized code in the Amazon S3 bucket.
Terraform creates the AWS infrastructure to deploy the modernized application, including an Application Load Balancer, Amazon Elastic Compute Cloud (Amazon EC2) instance, and Amazon Relational Database Service (Amazon RDS) database. Terraform deploys the modernized code to the Amazon EC2 instance.
The VSAM files are uploaded to the Amazon EC2 instance and then migrated to the Amazon RDS database.
Tools
AWS services
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down. In this pattern, the modernized Java application runs on an Amazon EC2 instance.
AWS IAM Identity Center helps you centrally manage single sign-on (SSO) access to your AWS accounts and cloud applications.
Amazon Relational Database Service (Amazon RDS) helps you set up, operate, and scale a relational database in the AWS Cloud.
AWS Secrets Manager helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
Amazon Simple Storage Service (Amazon S3) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.
AWS Transform uses agentic AI to help you accelerate the modernization of legacy workloads, such as .NET, mainframe, and VMware workloads.
Other tools
Apache Maven is an open source software project management and build automation tool for Java projects.
Apache Tomcat is an open source Servlet container and web server for Java code.
HashiCorp Terraform is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
Spring Boot is an open source framework built on top of the Spring Framework in Java.
Code repository
The code for this pattern is available in the GitHub Mainframe Transformation E2E repository. This pattern uses the open source AWS CardDemo mainframe application as a sample application.
Best practices
Assign full ownership of code and resources targeted for migration.
Develop and test a proof of concept before scaling to a full migration.
Secure commitment from all stakeholders.
Establish clear communication channels.
Define and document minimum viable product (MVP) requirements.
Set clear success criteria.
Epics
Task | Description | Skills required |
---|---|---|
Create a bucket. | Create an Amazon S3 bucket in the same AWS account and Region where AWS Transform is enabled. You use this bucket to store the mainframe application code, data and additional scripts required to build and run the application. AWS Transform uses this bucket to store the refactored code and other files associated with the transformation. For instructions, see Creating a bucket in the Amazon S3 documentation. | General AWS, AWS administrator |
Set the CORS permissions for the bucket. | When setting up your bucket for AWS Transform access, you must configure cross-origin resource sharing (CORS) for the bucket. If CORS is not set up correctly, you might not be able to use the inline viewing or file comparison functionality of AWS Transform. For instructions about how to configure CORS for a bucket, see Using cross-origin resource sharing in the Amazon S3 documentation. For the policy, see S3 bucket CORS permissions in the AWS Transform documentation. A hedged AWS CLI example follows this task. | General AWS, AWS administrator |
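The following AWS CLI sketch shows one way to apply a CORS policy to the bucket. The bucket name and the allowed origin are placeholders; copy the exact rule values from the S3 bucket CORS permissions topic in the AWS Transform documentation.

```bash
# Hedged sketch: apply a CORS policy with the AWS CLI. Replace <bucket-name>,
# and take the exact AllowedOrigins/AllowedMethods values from the AWS
# Transform documentation -- the origin below is only a placeholder.
aws s3api put-bucket-cors \
  --bucket <bucket-name> \
  --cors-configuration '{
    "CORSRules": [
      {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST"],
        "AllowedOrigins": ["https://<your-aws-transform-web-app-domain>"],
        "ExposeHeaders": ["ETag"]
      }
    ]
  }'
```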
Prepare the sample mainframe application code. | Enter the following command to clone the CardDemo repository to your local workstation: git clone https://github.com/aws-samples/aws-mainframe-modernization-carddemo.git
Compress the aws-mainframe-modernization-carddemo folder into a ZIP file. Upload the ZIP file to the Amazon S3 bucket that you created. For instructions, see Uploading objects in the Amazon S3 documentation.
| General AWS, App developer |
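The clone, compress, and upload steps in the previous task can also be scripted from a shell. This is a minimal sketch; <bucket-name> is a placeholder for the bucket that you created earlier, and it assumes that the zip utility and the AWS CLI are installed.

```bash
# Clone the sample application, compress it, and upload the archive to the
# S3 bucket created in the previous task (<bucket-name> is a placeholder).
git clone https://github.com/aws-samples/aws-mainframe-modernization-carddemo.git
zip -r aws-mainframe-modernization-carddemo.zip aws-mainframe-modernization-carddemo
aws s3 cp aws-mainframe-modernization-carddemo.zip s3://<bucket-name>/
```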
Task | Description | Skills required |
---|---|---|
Set up the AWS Transform job. | Access the AWS Transform web application by logging in with your credentials. Create a new workspace by following the instructions in Setting up your workspace in the AWS Transform documentation. On your workspace landing page, choose Ask AWS Transform to create a job. Next, choose Mainframe Modernization as the type of job. In the chat window, enter Transform code to Java. Review the suggested job type, name, and objective. To confirm, enter Yes. Choose Create job.
| App developer, App owner |
Set up a connector. | Set up a connector with the Amazon S3 bucket that you created. For instructions, see Set up a connector in the AWS Transform documentation. When prompted, enter the path for the aws-mainframe-modernization-carddemo zip file in the Amazon S3 bucket. Wait for the analysis to complete.
| App developer, App owner |
Transform the code. | Review the results of the code analysis according to the instructions in Analyze code in the AWS Transform documentation. Refactor the mainframe code according to the instructions in Refactor code in the AWS Transform documentation. For the sample CardDemo mainframe application, you can accept the default settings. Wait for the refactoring to complete. Choose View results to see the path for the refactored code in the Amazon S3 bucket. Make note of this file path. You will need it later.
| App developer, App owner |
Task | Description | Skills required |
---|---|---|
Update the templates. | Enter the following command to clone the Mainframe Transformation E2E repository to your local workstation: git clone https://github.com/aws-samples/sample-mainframe-transformation-e2e.git
Enter the following command to retrieve your current public IP address: curl checkip.amazonaws.com
Enter the following command to navigate to the infra directory: cd mainframe-transformation-e2e/infra
Open the variables.tf file. Replace YOUR_IP_ADDRESS_HERE with your IP address. If you have a public hosted zone, do the following: Replace hosted_zone_name with your hosted zone name. Set hosted_zone_enabled to true.
If you do not have a public hosted zone, do the following: Enter the following commands to generate a self-signed certificate: openssl genrsa 2048 > my-private-key.pem
openssl req -new -x509 -nodes -sha256 -days 365 -key my-private-key.pem -outform PEM -out my-certificate.pem
Enter the following command to import the certificate into AWS Certificate Manager (ACM): aws acm import-certificate \
--certificate fileb://my-certificate.pem \
--private-key fileb://my-private-key.pem
The output of this command includes the Amazon Resource Name (ARN) of the imported certificate. (One way to capture the ARN in a shell variable is shown after this task.) Replace self_signed_cert_arn with the ARN of your certificate. Set hosted_zone_enabled to false.
Change aws_region to the target Region. The default is us-east-1. Save and close the variables.tf file.
| General AWS, AWS administrator |
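As noted in the previous task, one way to capture the certificate ARN for variables.tf is to query it directly when you import the certificate. This sketch assumes that the AWS CLI default Region matches your target Region.

```bash
# Import the self-signed certificate and capture its ARN in a shell variable,
# then print it so that it can be pasted into variables.tf.
CERT_ARN=$(aws acm import-certificate \
  --certificate fileb://my-certificate.pem \
  --private-key fileb://my-private-key.pem \
  --query CertificateArn --output text)
echo "$CERT_ARN"
```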
Deploy the infrastructure. | Enter the following command to initialize Terraform: terraform init
Enter the following command to generate an execution plan: terraform plan
Review the plan, and validate the resources and infrastructure components that will be created. Enter the following command to deploy the infrastructure: terraform apply
When prompted, enter yes to confirm the deployment. Wait until the deployment is completed.
| Terraform |
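As an optional variation on the previous task, you can save the reviewed plan to a file and apply exactly that plan, so the deployed changes match what you reviewed.

```bash
# Save the execution plan to a file and apply that exact plan.
terraform init
terraform plan -out=tfplan
terraform apply tfplan
```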
Task | Description | Skills required |
---|---|---|
Install the required software. | Connect to your Amazon EC2 instance by using AWS Systems Manager Session Manager. Enter the following command to switch to the root user: sudo su -
Enter the following command to navigate to the scripts directory: cd /opt/scripts
Review the install_software.sh script. This script installs Java 17, Apache Maven, and Apache Tomcat 10.0.23. Update the scripts as needed for your use case. Enter the following command to make the script executable: chmod +x install_software.sh
Enter the following command to run the script: ./install_software.sh
| App developer, Migration engineer |
Verify software installation. | Enter the following command to start the Tomcat server: /opt/tomcat/apache-tomcat-10.0.23/bin/startup.sh
Enter the following command to verify the web server response: curl http://localhost:8080
The output should confirm that Tomcat is serving an HTML page.
| App developer, Migration engineer |
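If you prefer a terser check than inspecting the HTML, the following sketch prints only the HTTP status code; 200 indicates that Tomcat is serving the default page.

```bash
# Print only the HTTP status code returned by the local Tomcat server.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
```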
Task | Description | Skills required |
---|---|---|
Download and extract the generated code. | Enter the following command to make the download_and_extract.sh script executable. This script downloads the refactored code and Gapwalk runtime library stored in the Amazon S3 bucket: chmod +x /opt/scripts/download_and_extract.sh
Enter the following command to run the script. Replace <file_path> with the path to the generated.zip file in your Amazon S3 bucket: ./download_and_extract.sh <file_path>
The file path is typically s3://<bucket-name>/transform_output/<aws_transform_job_id>/codetransformation/generated.zip . Enter the following command to navigate into the shared folder: cd /opt/runtime/velocity/shared
Enter the following command to copy the deploy-velocity-runtime.sh script: cp /opt/scripts/deploy-velocity-runtime.sh .
Enter the following command to make the copied script executable: chmod +x deploy-velocity-runtime.sh
Enter the following command to run the script. This script copies all the required Web Application Archive (WAR) dependencies present in the Project Object Model (POM) files into the repository folder: ./deploy-velocity-runtime.sh
Verify successful execution by checking that there are no errors and that the required WAR dependencies are installed in your local Maven repository.
| App developer, Migration engineer |
Build the modernized application. | Enter the following command to navigate to the app-pom project directory: cd /opt/codebase/app-pom/
Enter the following command to build the application and install the artifacts into your local Maven repository: mvn clean install
Wait for the build to complete. When running this command for the CardDemo application, you might encounter warning messages for the app-web project. You can safely ignore these warnings. After a successful build, confirm the presence of app-service/target/app-service-1.0.0.war and app-web/target/app-web-1.0.0.war. Do not restart the Tomcat server at this stage; doing so would cause errors because the required databases do not exist yet. You must set up the database before you restart the server.
| App developer, Migration engineer |
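A quick way to confirm that the build produced both artifacts mentioned in the previous task:

```bash
# Both WAR files should exist after a successful build.
ls -l app-service/target/app-service-1.0.0.war app-web/target/app-web-1.0.0.war
```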
Task | Description | Skills required |
---|---|---|
Create the database and JICS schemas. | Enter the following command to rename the csd commands folder to csd_commands. This removes the spaces from the folder name: mv /opt/codebase/extra/csd\ commands/ /opt/codebase/extra/csd_commands
Enter the following command to navigate into the scripts directory: cd /opt/scripts
Enter the following command to make the database migration script executable: chmod +x database_migration_setup.sh
Enter the following commands to configure the following variables as parameters: RDS_ENDPOINT=<database_endpoint>
SECRET_NAME=<secret_name>
JICS_SQL_SCRIPT_DIR=/opt/runtime/velocity/jics/sql/jics.sql
INIT_JICS_SQL_SCRIPT_DIR=/opt/codebase/extra/csd_commands/sql/aws-mainframe-modernization-carddemo-main/app/csd/initJics.sql
Where: <database_endpoint> is the endpoint of the Amazon RDS database that you deployed through Terraform, and <secret_name> is the name of the AWS Secrets Manager secret that you deployed through Terraform. Enter the following command to run the database migration script: ./database_migration_setup.sh $RDS_ENDPOINT $SECRET_NAME $JICS_SQL_SCRIPT_DIR $INIT_JICS_SQL_SCRIPT_DIR
Enter the following command to connect to the database from your Amazon EC2 instance: psql -h <Your Amazon RDS Endpoint> -U foo -p 5432 postgres
When prompted, enter your database credentials.
| App developer, Migration engineer |
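To avoid the interactive password prompt, you can pull the database password from Secrets Manager before running psql. This is a hedged sketch: it assumes that the secret value is JSON with a password field (the actual field name depends on how the Terraform configuration created the secret) and that jq is installed.

```bash
# Retrieve the database password from Secrets Manager (assumes a JSON secret
# with a "password" field) and export it so that psql does not prompt for it.
export PGPASSWORD=$(aws secretsmanager get-secret-value \
  --secret-id "$SECRET_NAME" \
  --query SecretString --output text | jq -r '.password')
psql -h "$RDS_ENDPOINT" -U foo -p 5432 postgres
```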
Validate database creation. | Enter the following command to view all databases: \l
Enter the following command to switch to the jics database: \c jics
Enter the following command to review a list of the created tables: \dt
| App developer, Migration engineer |
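If you prefer to script the validation instead of using the interactive psql session, the same checks can be run non-interactively. This sketch assumes the RDS_ENDPOINT variable and the database credentials from the previous tasks.

```bash
# List all databases, then list the tables in the jics database.
psql -h "$RDS_ENDPOINT" -U foo -p 5432 -d postgres -c '\l'
psql -h "$RDS_ENDPOINT" -U foo -p 5432 -d jics -c '\dt'
```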
Migrate data to the JICS database. | Enter the following command to make the execute_listcat_sql.sh script executable: chmod +x execute_listcat_sql.sh
Enter the following command to configure the PATH_TO_LISTCAT_SQL_FILES variable, which is the directory that contains your LISTCAT SQL files: PATH_TO_LISTCAT_SQL_FILES=/opt/codebase/extra/listcat/sql/cluster/aws-mainframe-modernization-carddemo-main/app/catlg
Make sure that the RDS_ENDPOINT , SECRET_NAME , and PATH_TO_LISTCAT_SQL_FILES variables are properly set according to the previous instructions. Enter the following command to run the execute_listcat_sql.sh script: ./execute_listcat_sql.sh $RDS_ENDPOINT $SECRET_NAME $PATH_TO_LISTCAT_SQL_FILES
This script updates the VSAM file properties in the JICS database and runs the necessary queries to modify the database.
| App developer, Migration engineer |
Task | Description | Skills required |
---|---|---|
Install the modernized application on the Amazon EC2 instance. | Enter the following command to make the application_installer.sh script executable: chmod +x /opt/scripts/application_installer.sh
Enter the following commands to configure the following variables as parameters: RDS_ENDPOINT=<database_endpoint>
SECRET_NAME=<secret_name>
AIX_JSON_FILE_PATH=/opt/codebase/extra/csd_commands/json/aws-mainframe-modernization-carddemo-main/jicsFileAix.json
LISTCAT_JSON_FILES_DIR=/opt/codebase/extra/listcat/json/cluster/default/aws-mainframe-modernization-carddemo-main/app/catlg
S3_PATH_FOR_EBCDIC_DATA_FILES=s3://<bucket_name>/transform-output/<job_id>/inputs/aws-mainframe-modernization-carddemo-main/app/data/EBCDIC
Where: <database_endpoint> is the endpoint of the Amazon RDS database that you deployed through Terraform.
<secret_name> is the name of the AWS Secrets Manager secret that you deployed through Terraform.
<bucket_name> is the name of the Amazon S3 bucket that contains the modernized application.
<job_id> is the ID of the AWS Transform job.
Enter the following command to run the application_installer.sh script: ./application_installer.sh $RDS_ENDPOINT $SECRET_NAME $AIX_JSON_FILE_PATH $LISTCAT_JSON_FILES_DIR $S3_PATH_FOR_EBCDIC_DATA_FILES
In the /opt/tomcat/apache-tomcat-10.0.23/workingdir/config folder, in the application-utility-pgm.yml file, change the encoding parameter to the following: encoding : CP1047
When you refactor applications automatically in AWS Transform by using AWS Blu Age, you configure the application and its runtime environment through YAML files. For example, you can configure logging for the application in the application-main.yml file. For more information about the available properties, see Enable properties for AWS Blu Age Runtime.
| App developer, Cloud architect |
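The encoding change described in the previous task can also be made from the shell. This is a sketch; the exact key layout in the YAML file might differ, so review the file after editing.

```bash
# Set the encoding parameter to CP1047 and confirm the change.
cd /opt/tomcat/apache-tomcat-10.0.23/workingdir/config
sed -i 's/^\( *encoding *:\).*/\1 CP1047/' application-utility-pgm.yml
grep encoding application-utility-pgm.yml
```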
Restart the Tomcat server. | Enter the following command to navigate into the working directory: cd /opt/tomcat/apache-tomcat-10.0.23/workingdir
Enter the following commands to stop and start the Tomcat server: /opt/tomcat/apache-tomcat-10.0.23/bin/shutdown.sh
/opt/tomcat/apache-tomcat-10.0.23/bin/startup.sh
Enter the following command to monitor the Tomcat service startup logs: tail -f /opt/tomcat/apache-tomcat-10.0.23/logs/catalina.out
| App developer, Cloud architect |
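A successful Tomcat start writes a "Server startup" line to catalina.out, so you can confirm startup without following the full log:

```bash
# Confirm that Tomcat finished starting.
grep "Server startup" /opt/tomcat/apache-tomcat-10.0.23/logs/catalina.out
```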
Migrate the VSAM dataset. | Open the Amazon EC2 console. In the navigation pane, choose Load balancers. Choose the load balancer that was created through Terraform. Locate the Domain Name System (DNS) name of your Application Load Balancer, such as application-load-balancer-<id>.<region>.elb.amazonaws.com. In your browser, navigate to http://<dns_name>/gapwalk-application/scripts/data-load, where <dns_name> is the DNS name of the Application Load Balancer. This starts the data load script. Wait for the script to complete. When finished, the browser displays DONE. On the Amazon EC2 instance, open a new terminal. Enter the following command to connect to the Amazon RDS database, replacing <database_endpoint> with your value: psql -h <database_endpoint> -U foo -p 5432 postgres
When prompted, enter your credentials to connect to the database. Enter the following command to view all databases: \l
Enter the following command to switch to the bluesam database: \c bluesam
Enter the following command to review a list of the created tables: \dt
Enter the following command to validate the data load: SELECT * FROM public.aws_m2_carddemo_usrsec_vsam_ksds;
The output should show 10 records returned.
| App developer, Migration engineer |
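If you prefer the command line to the browser, the data load and the record count check from the previous task can be run as follows. This sketch assumes the RDS_ENDPOINT variable and database credentials from the earlier tasks; <dns_name> is the DNS name of the Application Load Balancer.

```bash
# Start the data load script through the Application Load Balancer, then
# count the records loaded into the USRSEC table.
curl http://<dns_name>/gapwalk-application/scripts/data-load
psql -h "$RDS_ENDPOINT" -U foo -p 5432 -d bluesam \
  -c 'SELECT COUNT(*) FROM public.aws_m2_carddemo_usrsec_vsam_ksds;'
```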
Update the parameters in the Groovy scripts. | Enter the following command to navigate into the script directory: cd /opt/tomcat/apache-tomcat-10.0.23/webapps/workingdir/scripts
In all of the Groovy files that reference flat files, update the following file configurations: the local file path (the path to the flat file in the local directory of your Amazon EC2 instance), the file system type (the file system that contains the flat files), and the record size (the size of the flat file).
For reference, see the sample DUSRSECJ.jcl.groovy script in the code repository. Save and close the files.
| App developer |
Task | Description | Skills required |
---|---|---|
Test the modernized application. | Access the online application through the Application Load Balancer (http://<your-load-balancer-dns> ) or through the hosted zone (https://myhostedzone.dev/ ). For the transaction ID, enter CC00 . For the user name, enter USER0001 . For the password, enter PASSWORD . After successful login, the main menu displays.
| App developer, Test engineer |
Verify the batch scripts. | Access the scripts interface through either the Application Load Balancer (http://<your-load-balancer-dns>/gapwalk-application/scripts ) or through the hosted zone (https://myhostedzone.dev/gapwalk-application/scripts ). Choose a script to run, such as the DUSRSECJ.jcl.groovy script. Verify that the script runs successfully. The following is a sample output after successful execution. { "exitCode": 0, "stepName": "STEP03", "program": "IDCAMS", "status": "Succeeded" }
| App developer, Test engineer |
Task | Description | Skills required |
---|---|---|
Prepare to delete the infrastructure. | Enter the following command to remove deletion protection from the Amazon RDS instance: aws rds modify-db-instance \
--db-instance-identifier <your-db-instance-name> \
--no-deletion-protection \
--apply-immediately
Enter the following command to remove deletion protection from the Application Load Balancer: aws elbv2 modify-load-balancer-attributes \
--load-balancer-arn <your-load-balancer-arn> \
--attributes Key=deletion_protection.enabled,Value=false
Enter the following commands to delete the contents of the Amazon S3 buckets: ACCOUNT_NUMBER=$(aws sts get-caller-identity --query Account --output text)
aws s3 rm s3://mf-carddemo-$ACCOUNT_NUMBER --recursive
aws s3 rm s3://mf-carddemo-logs-$ACCOUNT_NUMBER --recursive
| General AWS |
Delete the infrastructure. | These steps permanently delete your resources. Make sure that you have backed up any important data before proceeding. Enter the following command to navigate into the infra folder: cd mainframe-transformation-e2e/infra
Enter the following command to delete the infrastructure: terraform destroy --auto-approve
| General AWS |
Troubleshooting
Issue | Solution |
---|---|
Terraform authentication errors | Make sure that the AWS credentials are properly configured, verify that you have selected the correct AWS profile, and confirm that you have the necessary permissions. (A quick credential check is shown after this table.) |
Tomcat-related errors | Check catalina.out in /opt/tomcat/apache-tomcat-10.0.23/logs for any exceptions. Enter the following command to change ownership of the Tomcat folder to the Tomcat user: chown -R tomcat:tomcat /opt/tomcat/* |
URL not loading | Make sure that the Application Load Balancer security group has your IP address in the inbound rule as a source. |
Authentication issue in the Tomcat log | Confirm that the database secret password in AWS Secrets Manager and the password in server.xml match. |
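For the Terraform authentication issue, a quick way to confirm which account and identity your current AWS credentials resolve to:

```bash
# Verify the credentials and account in use; add --profile <profile-name>
# if you use a named AWS CLI profile.
aws sts get-caller-identity
```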
Related resources
AWS Prescriptive Guidance
AWS service documentation
AWS blog posts