Microservices

APIs are the front door of microservices: they serve as the entry point for application logic behind a set of programmatic interfaces, typically a RESTful web services API. This API accepts and processes calls from clients, and might implement functionality such as traffic management, request filtering, routing, caching, authentication, and authorization.
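As a minimal sketch, assuming Amazon API Gateway plays the role of that front door: the quick-create form of the CreateApi call below (names and the Lambda ARN are hypothetical placeholders) publishes an HTTP API that proxies every request to a backend function; features such as throttling, caching, and authorizers are then configured on this API.

import boto3

apigw = boto3.client("apigatewayv2")

# Quick-create an HTTP API that routes all requests to a backend Lambda
# function (the function ARN is a placeholder). The Lambda function also
# needs a resource-based permission allowing API Gateway to invoke it.
api = apigw.create_api(
    Name="orders-api",
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:123456789012:function:orders-service",
)
print("Invoke URL:", api["ApiEndpoint"])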

Microservices implementation

AWS has integrated building blocks that support the development of microservices. Two popular approaches are using AWS Lambda and Docker containers with AWS Fargate.

With AWS Lambda, you upload your code and let Lambda take care of everything required to run and scale it to meet your actual demand curve with high availability; no infrastructure administration is needed. Lambda supports several programming languages and can be invoked from other AWS services or called directly from any web or mobile application. One of the biggest advantages of AWS Lambda is that you can move quickly: you can focus on your business logic, because security and scaling are managed by AWS. This opinionated approach is what drives the scalability of the platform: because Lambda standardizes how code is packaged and invoked, it can run and scale that code automatically.
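As an illustration of how small such a microservice can be, the sketch below shows a Lambda handler for a hypothetical order-lookup service. It assumes an Amazon API Gateway proxy integration, which delivers the HTTP request as the event dict and expects a statusCode/body response shape; the payload fields are illustrative.

import json

def handler(event, context):
    # API Gateway proxy integrations pass URL path parameters here.
    order_id = (event.get("pathParameters") or {}).get("orderId")
    if order_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "orderId is required"})}

    # Business logic would go here (for example, a DynamoDB lookup);
    # hard-coded for illustration.
    order = {"orderId": order_id, "status": "SHIPPED"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(order),
    }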

A common approach to reducing operational effort for deployments is container-based deployment. Container technologies like Docker have grown in popularity over the last few years due to benefits such as portability, productivity, and efficiency. The learning curve with containers can be steep, however, and you have to think about security fixes for your Docker images and about monitoring. Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS) eliminate the need to install, operate, and scale your own cluster management infrastructure. With API calls, you can launch and stop Docker-enabled applications, query the complete state of your cluster, and access many familiar features like security groups, Elastic Load Balancing, Amazon Elastic Block Store (Amazon EBS) volumes, and AWS Identity and Access Management (IAM) roles.
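As a brief sketch of what those API calls look like with the AWS SDK for Python (Boto3) — cluster, task definition, and resource IDs below are hypothetical placeholders — a task can be launched, the cluster state queried, and the task stopped again:

import boto3

CLUSTER = "demo-cluster"            # placeholder cluster name
TASK_DEFINITION = "orders-service:1"  # placeholder task definition

ecs = boto3.client("ecs")

# Launch a Docker-enabled task on Fargate.
response = ecs.run_task(
    cluster=CLUSTER,
    taskDefinition=TASK_DEFINITION,
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],    # placeholder subnet
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder SG
            "assignPublicIp": "DISABLED",
        }
    },
)
task_arn = response["tasks"][0]["taskArn"]

# Query the complete state of the cluster and its running tasks.
print(ecs.describe_clusters(clusters=[CLUSTER])["clusters"])
print(ecs.list_tasks(cluster=CLUSTER)["taskArns"])

# Stop the task again.
ecs.stop_task(cluster=CLUSTER, task=task_arn, reason="example shutdown")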

AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and Amazon EKS. With Fargate, you no longer have to worry about provisioning enough compute resources for your container applications. Fargate can launch tens of thousands of containers and easily scale to run your most mission-critical applications.

Amazon ECS supports container placement strategies and constraints to customize how Amazon ECS places and terminates tasks. A task placement constraint is a rule that is considered during task placement. You can associate attributes, which are essentially key-value pairs, with your container instances and then use a constraint to place tasks based on these attributes. For example, you can use constraints to place certain microservices based on instance type or instance capability, such as GPU-powered instances.
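To make this concrete, the hedged sketch below (attribute name, cluster, and ARNs are hypothetical) attaches a custom attribute to a container instance and then uses memberOf placement constraints — one on the custom attribute, one on the built-in ecs.instance-type attribute — so the task only lands on matching instances. Note that placement constraints apply to the EC2 launch type, not Fargate.

import boto3

ecs = boto3.client("ecs")

# Tag a container instance with a custom key-value attribute
# (the instance ARN and attribute name are placeholders).
ecs.put_attributes(
    cluster="demo-cluster",
    attributes=[{
        "name": "workload-class",
        "value": "gpu",
        "targetType": "container-instance",
        "targetId": "arn:aws:ecs:us-east-1:123456789012:container-instance/demo-cluster/abcdef123456",
    }],
)

# Place the task only on instances carrying the custom attribute and
# on GPU-powered instance types (cluster query language expressions).
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="ml-inference:3",
    launchType="EC2",
    placementConstraints=[
        {"type": "memberOf", "expression": "attribute:workload-class == gpu"},
        {"type": "memberOf", "expression": "attribute:ecs.instance-type =~ g4dn.*"},
    ],
)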

Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether in on-premises data centers or public clouds. Amazon EKS integrates IAM with Kubernetes, enabling you to register IAM entities with the native authentication system in Kubernetes. There is no need to manually set up credentials for authenticating with the Kubernetes control plane: the IAM integration lets you authenticate directly with the control plane itself and grant fine-grained access to its public endpoint.
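As a hedged sketch of what that integration looks like from the client side (the cluster name is a placeholder): kubectl is configured with the cluster's endpoint and certificate authority, both retrievable via the DescribeCluster API, plus a short-lived token derived from the caller's IAM credentials, for example via the aws eks get-token CLI command.

import base64
import boto3

eks = boto3.client("eks")

# Retrieve the connection details kubectl needs; "demo-cluster" is a placeholder.
cluster = eks.describe_cluster(name="demo-cluster")["cluster"]

endpoint = cluster["endpoint"]  # HTTPS endpoint of the Kubernetes control plane
ca_cert = base64.b64decode(cluster["certificateAuthority"]["data"])

print("API server:", endpoint)
# A kubeconfig would combine this endpoint and CA certificate with an exec
# credential plugin (e.g. `aws eks get-token --cluster-name demo-cluster`)
# so that kubectl authenticates with the caller's IAM identity.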

Docker images used in Amazon ECS and Amazon EKS can be stored in Amazon Elastic Container Registry (Amazon ECR). Amazon ECR eliminates the need to operate and scale the infrastructure required to power your container registry.
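A short sketch of the registry workflow with Boto3 (the repository name is hypothetical): create a repository, then fetch a temporary authorization token that a Docker client can use to log in and push images.

import base64
import boto3

ecr = boto3.client("ecr")

# Create a private repository for the microservice's images (name is a placeholder).
repo = ecr.create_repository(repositoryName="orders-service")["repository"]
print("Push images to:", repo["repositoryUri"])

# Fetch a temporary credential for the Docker client. The token decodes to
# "AWS:<password>" and is valid for 12 hours.
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
# Equivalent to: docker login --username AWS --password <password> <proxy endpoint>
print("Registry endpoint:", auth["proxyEndpoint"])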

Continuous integration and continuous delivery (CI/CD) are best practices and a vital part of a DevOps initiative that enables rapid software changes while maintaining system stability and security. However, this is out of scope for this whitepaper. For more information, refer to the Practicing Continuous Integration and Continuous Delivery on AWS whitepaper.

AWS PrivateLink is a highly available, scalable technology that enables you to privately connect your virtual private cloud (VPC) to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services. You do not require an internet gateway, network address translation device, public IP address, AWS Direct Connect connection, or VPN connection to communicate with the service. Traffic between your VPC and the service does not leave the Amazon network.

AWS PrivateLink is a great way to increase the isolation and security of a microservices architecture. A microservice, for example, can be deployed in a totally separate VPC, fronted by a load balancer, and exposed to other microservices through an AWS PrivateLink endpoint. With this setup, the network traffic to and from the microservice never traverses the public internet. One use case for such isolation is regulatory compliance for services handling sensitive data, such as PCI DSS, HIPAA, and EU/US Privacy Shield requirements. Additionally, AWS PrivateLink allows you to connect microservices across different accounts and Amazon VPCs with no need for firewall rules, path definitions, or route tables, simplifying network management. Using PrivateLink, software as a service (SaaS) providers and ISVs can likewise offer their microservices-based solutions with complete operational isolation and secure access.
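As an illustrative sketch of that setup (all ARNs and resource IDs are placeholders): the service-owner account publishes the microservice's Network Load Balancer as a VPC endpoint service, and a consumer — potentially in a different account and VPC — creates an interface endpoint to reach it privately.

import boto3

ec2 = boto3.client("ec2")

# Service-owner side: expose the microservice (fronted by a Network Load
# Balancer) as a VPC endpoint service. The NLB ARN is a placeholder.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/orders-nlb/abc123"
    ],
    AcceptanceRequired=True,  # the owner approves each connection request
)["ServiceConfiguration"]

# Consumer side (in practice a separate account/session): create an interface
# endpoint to the published service. IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName=service["ServiceName"],
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)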