
Step 1. Evaluate your applications

The goals of this phase are to:

  • Thoroughly understand your application landscape and prepare your applications for modern data platforms, so you can accelerate the time to value without impacting your business, and then modernize, optimize, and scale.

  • Profile your application landscape to identify the benefits, risks, and costs associated with change.

  • Provide an end-to-end set of services: from strategy and planning; through deployment, migration, and application modernization; to ongoing support.

  • Build policies, recommendations, and controls that provide reusable practices and tools to deliver ongoing business value.

In the evaluation phase, application owners and architects use a modernization diagnostic playbook to validate their modernization goals and priorities.

Using the modernization diagnostic playbook

A modernization diagnostic playbook provides a process for determining the value to the enterprise of moving from the current state to the future state. This includes the technology changes that modernization involves.

You use the diagnostic playbook to determine the priority of your application or application suite for cloud modernization, and to identify the components that need to be addressed during modernization.

Diagnostic dimensions

The modernization diagnostic playbook helps you understand the following dimensions of the current and target (post-migration) state of an application or a group of applications:

  • Application grouping – Is there a reason to group applications (for example, by technology or operating model) for modernization?

  • Sequencing – Is there an order in which applications should be modernized, based on dependencies?

  • Technology – What are the technology categories (for example, middleware, database, messaging)?

  • Dependencies – Do the applications have key dependencies on other systems or middleware?

  • Environments – How many development, testing, and production environments are used?

  • Storage – What are the storage requirements (for example, the number of copies of the test data)?

  • Operating model – Can all components of the application adopt a continuous integration and continuous delivery (CI/CD) pipeline?

    • If so, which infrastructure responsibilities should be distributed to application teams, and to whom?

    • If not, what infrastructure responsibilities (for example, patching) should remain with an operations team?

  • Delivery model:

    • Based on the application or group of applications, should you replatform, refactor, rewrite, or replace?

    • Which portion of the modernization should use cloud-native services?

  • Skill sets – What expertise is required? For example:

    • A cloud application background to build applications with a modular architecture by using container and serverless technologies from the ground up.

    • DevOps expertise to develop solutions in the areas of CI/CD processes, infrastructure as code, and automation or application observability by using open-source and AWS tools and services.

  • Modernization approach – Considering the current state of the applications, cloud technology choices, current technical debt, CI/CD, monitoring, skills, and operating model, what is the technical migration work that needs to be done?

  • Modernization timing – What are the business portfolio timing considerations or other planned work considerations that might affect modernization timing?

  • Unit and total cost of infrastructure – What is the annual cost of maintaining your workload on premises vs. on AWS, based on economic analysis?

Evaluating applications against these dimensions helps you stay anchored in business, technology, and economics as you drive your modernization to the cloud.
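
If you want to capture these findings consistently, the following sketch shows one possible way to record the diagnostic dimensions for a single application. The field names and values are hypothetical examples chosen for illustration, not a schema prescribed by this guidance.

```
# Hypothetical record of diagnostic findings for one application. The field
# names mirror the dimensions above, but this is not a prescribed schema.
assessment = {
    "application": "order-management",
    "grouping": "Java middleware suite",
    "sequencing": "modernize after the shared messaging layer",
    "technology": ["middleware", "relational database", "messaging"],
    "dependencies": ["mainframe billing", "enterprise service bus"],
    "environments": {"development": 2, "testing": 1, "production": 1},
    "storage": "3 copies of test data, ~2 TB each",
    "operating_model": "CI/CD feasible for the web tier only",
    "delivery_model": "replatform now, refactor the data layer later",
    "skill_gaps": ["containers", "infrastructure as code"],
    "modernization_timing": "align with the Q3 portfolio freeze",
    "annual_infra_cost_usd": {"on_premises": 450_000, "aws_estimate": 310_000},
}

# Print one line per dimension so the profile can be reviewed at a glance.
for dimension, finding in assessment.items():
    print(f"{dimension}: {finding}")
```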

Building blocks

When you’re modernizing applications, you can classify your observations into three building blocks: business agility, organizational agility, and engineering effectiveness.

  • Business agility – Practices that concern how effectively the business translates its needs into requirements, how responsive the delivery organization is to business requests, and how much control the business has over releasing functionality into production environments.

  • Organizational agility – Practices that define delivery processes. Examples include agile methodology and DevOps ceremonies, role assignment and clarity, and overall collaboration, communication, and enablement across the organization.

  • Engineering effectiveness – Development practices related to quality assurance, testing, CI/CD, configuration management, application design, and source code management.

Identifying metrics

To learn if you are delivering what matters to your customers, you must implement measures that drive improvement and accelerate delivery. Goal, question, metric (GQM) provides an effective framework for ensuring that your measures meet these criteria. Use this framework to work back from your goals by following these steps (a short illustrative example follows the list):

  1. Identify the goal or outcome that you are pursuing.

  2. Derive the questions that must be answered to determine whether the goal is being met.

  3. Decide what should or could be measured to answer the questions adequately. There are two categories of measures:

    • Product metrics, which ensure that you are delivering what matters to your customers.

    • Operational metrics, which ensure that you are improving your software delivery lifecycle.
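
As an illustration only, the following sketch shows how a GQM breakdown might be recorded for a hypothetical modernization goal. The goal, questions, and metric names are assumptions made for this example, not values taken from the guidance.

```
from dataclasses import dataclass, field


@dataclass
class GqmEntry:
    """One goal-question-metric breakdown (example structure, not an AWS API)."""
    goal: str
    # Maps each question to the metrics that answer it.
    questions: dict = field(default_factory=dict)


# Hypothetical modernization goal, worked back to questions and metrics.
example = GqmEntry(
    goal="Shorten the time to release new onboarding features",
    questions={
        "How quickly does a business request reach production?": [
            "lead time (request to production)",
            "deployment frequency",
        ],
        "Are releases becoming safer as we modernize?": [
            "change failure rate",
            "time to restore service (MTTR)",
        ],
    },
)

for question, metrics in example.questions.items():
    print(f"Goal: {example.goal}\n  Q: {question}\n  Metrics: {', '.join(metrics)}")
```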

Product metrics

Product metrics focus on business outcomes and should be established when the return on investment (ROI) for a new scope of work is determined. A useful technique for establishing a product metric is to ask what will change in the business when that new scope of work is implemented. It’s helpful to formalize this thinking as a test that focuses on what would be true when a modernization feature is delivered.

For example, if you believe that migrating transactions out of legacy systems will unlock new opportunities to onboard clients, what is the improvement? How much capacity has to be created to onboard the next client? How would a test be constructed to validate that outcome? For this scenario, your product metrics might include the following (a numerical sketch follows the list):

  • Identify the business value test or hypothesis (for example, freeing x percent of transaction capacity will onboard y percent of new business).

  • Establish the baseline (for example, the current capacity of x transactions supports y customers).

  • Validate the outcome (for example, you have improved capacity by x percent; can you now onboard y percent more business?)
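
The following sketch works through the capacity hypothesis above with made-up numbers. In practice, the baseline and freed-capacity figures would come from your own measurements of the legacy system.

```
# All numbers here are made up for illustration; your baseline would come
# from measuring the legacy system before modernization.
baseline_capacity_tps = 400    # transactions per second on the legacy system
customers_supported = 120      # customers served at that capacity
freed_capacity_pct = 25        # hypothesis: migration frees 25% of capacity

# Translate the freed capacity into onboarding headroom.
freed_tps = baseline_capacity_tps * freed_capacity_pct / 100
tps_per_customer = baseline_capacity_tps / customers_supported
additional_customers = int(freed_tps / tps_per_customer)

print(f"Freed capacity: {freed_tps:.0f} TPS")
print(f"Estimated onboarding headroom: {additional_customers} new customers "
      f"({additional_customers / customers_supported:.0%} growth)")
```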

Operational metrics

To determine whether you are improving your software delivery lifecycle and accelerating your modernization, you must know your lead time and implementation time for delivering software. That is, how quickly can you convert a business need into functionality in production?

Useful operational metrics include the following (a computation sketch follows the list):

  • Lead time – How much time does it take for a scope of work to go from request to production?

  • Cycle time – How long does it take to implement a scope of work, from start to finish?

  • Deployment frequency – How often do you deploy changes to production?

  • Time to restore service – How long does it take to recover from failure (measured as the mean time to repair or MTTR)?

  • Change failure rate – What percentage of changes to production result in degraded service or require remediation (for example, a rollback or hotfix)?
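
As a rough sketch of how these metrics could be computed from delivery data, the example below uses a handful of hypothetical deployment records. In practice, the inputs would come from your CI/CD and incident-tracking tools.

```
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical deployment records: (requested, deployed, caused_failure, time_to_restore).
# In practice, this data would come from your CI/CD and incident-tracking tools.
deployments = [
    (datetime(2024, 5, 1),  datetime(2024, 5, 9),  False, None),
    (datetime(2024, 5, 3),  datetime(2024, 5, 12), True,  timedelta(hours=3)),
    (datetime(2024, 5, 8),  datetime(2024, 5, 15), False, None),
    (datetime(2024, 5, 10), datetime(2024, 5, 16), True,  timedelta(hours=1)),
]

lead_times = [(deployed - requested).days for requested, deployed, _, _ in deployments]
restore_times = [restore for _, _, failed, restore in deployments if failed]
period_days = (deployments[-1][1] - deployments[0][1]).days or 1

print(f"Mean lead time: {mean(lead_times):.1f} days")
print(f"Deployment frequency: {len(deployments) / period_days:.2f} deployments per day")
print(f"Change failure rate: {len(restore_times) / len(deployments):.0%}")
print(f"Mean time to restore: {mean(t.total_seconds() for t in restore_times) / 3600:.1f} hours")
```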