
Infrastructure automation

Contemporary architectures, whether monolithic or based on microservices, benefit greatly from infrastructure-level automation. Virtual machines let IT teams easily replicate environments and create templates of the operating system states they wanted; the host operating system became immutable and disposable. Cloud technology took the idea further and added scale to the mix. There is no need to predict future capacity when you can provision on demand for what you need and pay only for what you use. If an environment is no longer needed, you can shut down its resources. On-demand provisioning can also be combined with spot compute, which lets you request unused compute capacity at steep discounts.

One useful mental image for infrastructure-as-code is to picture an architect’s drawing come to life. Just as a blueprint with walls, windows, and doors can be transformed into an actual building, so load balancers, databases, or network equipment can be written in source code and then instantiated.
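
Taking the analogy literally, a few lines of infrastructure code are enough to describe such a blueprint. The following is a minimal sketch using the AWS CDK in TypeScript (one of the IaC tools discussed later in this section); the construct names and the choice of database engine are illustrative assumptions, not prescriptions:

    // The "walls, windows, and doors" of the blueprint: a network, a
    // database, and a load balancer, all described as source code.
    import { App, Stack } from 'aws-cdk-lib';
    import * as ec2 from 'aws-cdk-lib/aws-ec2';
    import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
    import * as rds from 'aws-cdk-lib/aws-rds';

    const app = new App();
    const stack = new Stack(app, 'BlueprintStack');

    const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2 });

    new rds.DatabaseInstance(stack, 'Db', {
      engine: rds.DatabaseInstanceEngine.postgres({
        version: rds.PostgresEngineVersion.VER_16,
      }),
      vpc,
    });

    new elbv2.ApplicationLoadBalancer(stack, 'Alb', {
      vpc,
      internetFacing: true,
    });

    app.synth(); // deploying this description instantiates the resources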

Microservices not only need disposable infrastructure-as-code, they also need to be built, tested, and deployed automatically. Continuous integration and continuous delivery are important for monoliths, but they are indispensable for microservices. Each service needs its own pipeline, one that can accommodate the various and diverse technology choices made by the team.

An automated infrastructure provides repeatability for quickly setting up environments. These environments can each be dedicated to a single purpose: development, integration, user acceptance testing (UAT) or performance testing, and production. Infrastructure that is described as code and then instantiated can easily be rolled back. This drastically reduces the risk of change and, in turn, promotes innovation and experimentation.

The following are the key factors from the twelve-factor app methodology that play a role in infrastructure automation:

  • Codebase (one codebase tracked in revision control, many deploys) – Because the infrastructure can be described as code, treat all code similarly and keep it in the service repository.

  • Config (store configurations in the environment) – The environment should hold and expose its own specific configuration (see the sketch after this list).

  • Build, release, run (strictly separate build and run stages) – One environment for each purpose.

  • Disposability (maximize robustness with fast startup and graceful shutdown) – This factor transcends the process layer and extends to downstream layers such as containers, virtual machines, and the virtual private cloud.

  • Dev/prod parity – Keep development, staging, and production as similar as possible.
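
To make the Config factor above concrete, here is a minimal AWS CDK (TypeScript) sketch that injects per-environment settings and secrets into an ECS container at deploy time, so the same image can run unchanged in every environment. The stack name, variable names, and sample image are illustrative assumptions:

    import { App, Stack } from 'aws-cdk-lib';
    import * as ecs from 'aws-cdk-lib/aws-ecs';
    import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

    const app = new App();
    const stack = new Stack(app, 'AppStack');

    // Illustrative secret; in practice this might reference an existing one.
    const dbSecret = new secretsmanager.Secret(stack, 'DbSecret');

    const taskDef = new ecs.FargateTaskDefinition(stack, 'TaskDef');
    taskDef.addContainer('app', {
      image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
      // Plain settings travel as environment variables...
      environment: {
        LOG_LEVEL: 'info',
        STAGE: stack.node.tryGetContext('stage') ?? 'dev',
      },
      // ...and secrets are injected by the platform, never baked into the image.
      secrets: {
        DATABASE_PASSWORD: ecs.Secret.fromSecretsManager(dbSecret),
      },
    });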

Successful applications use some form of infrastructure-as-code. Resources such as databases, container clusters, and load balancers can be instantiated from a description.

Within AWS, the unified AWS API provides a helpful foundation for implementing Infrastructure as Code (IaC) to automate resource deployment for containerized microservices. Native offerings like AWS CloudFormation, partner offerings like HashiCorp Terraform, and AWS open-source projects like the AWS Cloud Development Kit (AWS CDK) wrap the AWS API in tooling that helps teams provision resources in a structured, easy-to-maintain format. Open-source projects such as Amazon ECS and Amazon EKS Blueprints also provide best-practice-aligned configuration for the underlying infrastructure of containerized microservice architectures.
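
As a hedged illustration of what such tooling looks like in practice, the following AWS CDK (TypeScript) sketch uses the ApplicationLoadBalancedFargateService pattern; the stack name, image, and desired count are placeholder choices:

    import { App, Stack } from 'aws-cdk-lib';
    import * as ecs from 'aws-cdk-lib/aws-ecs';
    import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

    const app = new App();
    const stack = new Stack(app, 'MicroserviceStack');

    // One construct expands into a VPC, cluster, task definition,
    // Fargate service, and application load balancer.
    new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'Service', {
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
      },
      desiredCount: 2,
      publicLoadBalancer: true,
    });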

To wrap the application with a CI/CD pipeline, you should choose a code repository, an integration pipeline, an artifact-building solution, and a mechanism for deploying these artifacts. A microservice should do one thing and do it well. This implies that when you build a full application, there will potentially be a large number of services. Each of these needs its own integration and deployment pipeline. Keeping infrastructure automation in mind, architects who face this challenge of proliferating services will be able to find common solutions and replicate pipelines that have made a particular service successful. The CI/CD pipeline should push the containerized image of the microservice to an image repository; popular options include Amazon ECR, Red Hat Quay, Docker Hub, and JFrog container registries, any of which can be managed as part of the infrastructure automation.
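
To make one of these building blocks concrete, the following AWS CDK (TypeScript) sketch wires an Amazon ECR repository to a CodeBuild project that builds and pushes the service's container image. The names, build image, and buildspec commands are illustrative assumptions, not a prescribed pipeline:

    import { App, Stack } from 'aws-cdk-lib';
    import * as codebuild from 'aws-cdk-lib/aws-codebuild';
    import * as ecr from 'aws-cdk-lib/aws-ecr';

    const app = new App();
    const stack = new Stack(app, 'PipelineStack');

    const repo = new ecr.Repository(stack, 'ServiceRepo');

    const build = new codebuild.Project(stack, 'ImageBuild', {
      environment: {
        buildImage: codebuild.LinuxBuildImage.STANDARD_7_0,
        privileged: true, // required to run docker build inside CodeBuild
      },
      environmentVariables: {
        REPO_URI: { value: repo.repositoryUri },
      },
      buildSpec: codebuild.BuildSpec.fromObject({
        version: '0.2',
        phases: {
          pre_build: {
            commands: [
              // Log in to the registry host portion of the repository URI.
              'aws ecr get-login-password | docker login --username AWS --password-stdin ${REPO_URI%%/*}',
            ],
          },
          build: {
            commands: [
              'docker build -t $REPO_URI:latest .',
              'docker push $REPO_URI:latest',
            ],
          },
        },
      }),
    });

    // Allow the build role to push to and pull from the repository.
    repo.grantPullPush(build);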

As previously described in the Decentralized Governance section, GitOps is a popular operational framework for achieving Continuous Delivery. Git is used as the single source of truth for deploying into your cluster. Additionally, tools such as Flux, ArgoCD, or Spinnaker run in your cluster and implement changes by monitoring Git and image repositories. As one example, Flux watches image repositories, detects new images, and updates the running configuration based on a configurable policy. In general, Continuous Delivery (CD) tools can be leveraged for immediate, autonomous deployment to production environments.

Ultimately, the goal is to enable developers to push code updates and have the updated application delivered to multiple environments in minutes. There are many ways to deploy in phases, including the blue/green and canary methods. With a blue/green deployment, two environments live side by side, with one of them running a newer version of the application. Traffic is sent to the older version until a switch routes all traffic to the new environment. You can see an example of this in the following reference architecture:



Blue/green deployment

In this case, a switch of target groups behind a load balancer redirects traffic from the old resources to the new ones. Another way to achieve this is to front the services with two load balancers and operate the switch at the DNS level.
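
A hedged sketch of that target-group switch using the AWS SDK for JavaScript v3 in TypeScript follows; the function name and ARN parameters are illustrative, and rolling back is the same call with the blue target group's ARN:

    import {
      ElasticLoadBalancingV2Client,
      ModifyListenerCommand,
    } from '@aws-sdk/client-elastic-load-balancing-v2';

    const elb = new ElasticLoadBalancingV2Client({});

    // Repoint the listener's default action from the blue (old) target group
    // to the green (new) one; new connections then reach the new version.
    async function switchToGreen(listenerArn: string, greenTargetGroupArn: string) {
      await elb.send(
        new ModifyListenerCommand({
          ListenerArn: listenerArn,
          DefaultActions: [{ Type: 'forward', TargetGroupArn: greenTargetGroupArn }],
        }),
      );
    }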