Workloads OU – Application account - AWS Prescriptive Guidance


The following diagram illustrates the AWS security services that are configured in the Application account (along with the application itself).

Security services for the Application account

The Application account hosts the primary infrastructure and services that run and maintain an enterprise application. The Application account and the Workloads OU serve a few primary security objectives. First, you create a separate account for each application to provide boundaries and controls between workloads, which avoids commingling roles, permissions, data, and encryption keys. A separate account container also lets you give the application team broad rights to manage its own infrastructure without affecting others. Next, you add a layer of protection by giving the security operations team a mechanism to monitor and collect security data: employ an organization trail and local deployments of account security services (Amazon GuardDuty, AWS Config, AWS Security Hub, Amazon EventBridge, AWS IAM Access Analyzer), which are configured and monitored by the security team. Finally, you enable your enterprise to set controls centrally. You align the Application account with the broader security structure by making it a member of the Workloads OU, through which it inherits appropriate service permissions, constraints, and guardrails.

Application VPC

The virtual private cloud (VPC) in the Application account needs both inbound access (for the simple web services that you are modeling) and outbound access (for application needs or AWS service needs). By default, all resources inside a VPC are routable to each other. There are two private subnets: one to host the Amazon Elastic Compute Cloud (Amazon EC2) instances (application layer) and the other for Amazon Aurora (database layer). Network segmentation between the tiers, such as the application tier and database tier, is accomplished through VPC security groups, which restrict traffic at the instance level. For resiliency, the workload spans two or more Availability Zones and uses two subnets per zone.
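The two-tier, two-Availability-Zone subnet layout described above can be sketched with the Python standard library. This is an illustrative sizing exercise only; the VPC CIDR, Availability Zone names, and /20 subnet size are assumptions, not values prescribed by the AWS SRA.

```python
import ipaddress

def plan_subnets(vpc_cidr, azs):
    """Carve one application-tier and one database-tier private subnet
    per Availability Zone out of the VPC CIDR (illustrative sizing)."""
    vpc = ipaddress.ip_network(vpc_cidr)
    # Two tiers per AZ; splitting a /16 into /20s leaves room to grow.
    subnets = list(vpc.subnets(new_prefix=vpc.prefixlen + 4))
    plan = {}
    i = 0
    for az in azs:
        for tier in ("app", "db"):
            plan[f"{tier}-{az}"] = str(subnets[i])
            i += 1
    return plan

layout = plan_subnets("10.0.0.0/16", ["us-east-1a", "us-east-1b"])
```

Each Availability Zone receives one application subnet and one database subnet, matching the two-subnets-per-zone layout in the text.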

Design consideration

You can use Traffic Mirroring to copy network traffic from an elastic network interface of an EC2 instance and send it to out-of-band security and monitoring appliances for content inspection, threat monitoring, or troubleshooting. For example, you might want to monitor traffic that leaves your VPC or traffic whose source is outside your VPC. In this case, you mirror all traffic except the traffic passing within your VPC and send it to a single monitoring appliance. Traffic Mirroring is different from VPC Flow Logs, which capture information from packet headers only; Traffic Mirroring provides deeper insight into the network traffic by letting you analyze the actual traffic content, including the payload. Enable Traffic Mirroring only for the elastic network interfaces of EC2 instances that might be operating as part of sensitive workloads or for which you expect to need detailed diagnostics if an issue arises.
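The filter logic described above (mirror everything except intra-VPC traffic) can be modeled as follows. This is a sketch of the decision a mirror filter makes, not a call to the EC2 Traffic Mirroring API; the VPC CIDR is an assumption.

```python
import ipaddress

VPC_CIDR = ipaddress.ip_network("10.0.0.0/16")  # assumed VPC range

def should_mirror(src, dst):
    """Mirror a packet unless both endpoints are inside the VPC,
    mimicking a reject rule for intra-VPC traffic plus a catch-all accept."""
    internal = (ipaddress.ip_address(src) in VPC_CIDR
                and ipaddress.ip_address(dst) in VPC_CIDR)
    return not internal
```

In an actual deployment, you would express this as a traffic mirror filter with a reject rule for the VPC CIDR and an accept rule for 0.0.0.0/0.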

VPC endpoints

VPC endpoints provide another layer of security control as well as scalability and reliability. Use these to connect your application VPC to other AWS services. (In the Application account, the AWS SRA employs VPC endpoints for AWS KMS, AWS Systems Manager, and Amazon S3.) Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic. You can use a VPC endpoint to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with other AWS services. Traffic between your VPC and the other AWS service does not leave the Amazon network.

Another benefit of using VPC endpoints is that they support endpoint policies. A VPC endpoint policy is an IAM resource policy that you attach to an endpoint when you create or modify it. If you do not attach a policy when you create an endpoint, AWS attaches a default policy that allows full access to the service. An endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies). It is a separate policy that controls access from the endpoint to the specified service, and in this way it adds another layer of control over which AWS principals can communicate with resources or services.
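As an illustration of such an endpoint policy, the sketch below restricts an S3 gateway endpoint to a single bucket. The bucket name is a hypothetical placeholder; a real policy would name your application's buckets and the actions your workload needs.

```python
import json

# Hypothetical bucket name. The policy is attached to the endpoint itself
# and constrains what any principal can reach through that endpoint.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppBucketOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::example-app-bucket/*"],
        }
    ],
}

policy_document = json.dumps(endpoint_policy)
```

Note that bucket policies and identity-based policies still apply; the endpoint policy is an additional gate, not a replacement.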

Amazon EC2

The EC2 instances that compose our application use version 2 of the Instance Metadata Service (IMDSv2). IMDSv2 adds protections against four types of vulnerabilities that could be used to try to access the IMDS: open website application firewalls, open reverse proxies, server-side request forgery (SSRF) vulnerabilities, and open layer 3 firewalls and NATs. For more information, see the blog post Add defense in depth against open firewalls, reverse proxies, and SSRF vulnerabilities with enhancements to the EC2 Instance Metadata Service.
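IMDSv2 works as a session-oriented, two-step exchange: a PUT request obtains a short-lived token, and subsequent metadata requests must present that token in a header. The sketch below builds (but deliberately does not send) those two requests; the metadata path shown is just one example.

```python
import urllib.request

IMDS_BASE = "http://169.254.169.254"

def build_imdsv2_requests(ttl_seconds=21600):
    """Build, but do not send, the two-step IMDSv2 exchange:
    a PUT to obtain a session token, then a GET that presents it."""
    token_req = urllib.request.Request(
        f"{IMDS_BASE}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

    def metadata_req(token, path="latest/meta-data/instance-id"):
        # Every metadata call must carry the session token.
        return urllib.request.Request(
            f"{IMDS_BASE}/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )

    return token_req, metadata_req
```

Because the token can only be obtained via a PUT with a TTL header, simple SSRF gadgets and open proxies that can issue only GET requests cannot complete the exchange.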

Note

Although in practice, AWS Systems Manager Session Manager and Amazon Inspector agents are deployed on the EC2 instances, the Workloads OU diagram shows them separately for ease of readability.

Application Load Balancers

Application Load Balancers distribute incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. In the AWS SRA, the targets for the load balancer are the application EC2 instances. The AWS SRA uses HTTPS listeners to ensure that the communication channel is encrypted. The Application Load Balancer uses a server certificate to terminate the front-end connection and then decrypts requests from clients before sending them to the targets.

AWS Certificate Manager (ACM) natively integrates with Application Load Balancers, and the AWS SRA uses ACM to generate and manage the necessary X.509 (SSL/TLS server) certificates. You can enforce TLS 1.2 and strong ciphers for front-end connections through the Application Load Balancer security policy. For more information, see the Elastic Load Balancing documentation.
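A minimal sketch of the HTTPS listener configuration described above follows, expressed as the parameter set you might pass to the Elastic Load Balancing create-listener operation (via the AWS CLI or an SDK). The ARNs are placeholders; the security policy name shown is one of the published ELB policies that restricts front-end connections to TLS 1.2.

```python
# Placeholder ARNs; substitute your load balancer, ACM certificate,
# and target group. The SslPolicy enforces TLS 1.2 and strong ciphers.
https_listener = {
    "LoadBalancerArn": "arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/example/EXAMPLE",
    "Protocol": "HTTPS",
    "Port": 443,
    "SslPolicy": "ELBSecurityPolicy-TLS-1-2-2017-01",
    "Certificates": [{"CertificateArn": "arn:aws:acm:REGION:ACCOUNT:certificate/EXAMPLE"}],
    "DefaultActions": [
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/app/EXAMPLE"}
    ],
}
```

Because the certificate is ACM-managed, renewal is handled automatically, in contrast to the imported-certificate scenarios discussed in the design considerations below.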

Design considerations
  • You can alternatively use SSL/TLS tools to create a certificate signing request (CSR), get the CSR signed by a certificate authority (CA) to produce a certificate, and then import the certificate into ACM or upload it to IAM for use with the Application Load Balancer. If you import a certificate into ACM, you must monitor its expiration date and renew it before it expires. If you import a certificate into IAM, you must create a new certificate when the old one nears expiration, import the new certificate into ACM or IAM, add it to your load balancer, and remove the expired certificate from your load balancer. In addition to enforcing the HTTPS connection, you can configure the load balancer to securely authenticate users as they access your backend application. For identity management, you can use an identity provider (IdP) that is OpenID Connect (OIDC) compliant, use social IdPs, or authenticate users through corporate identities by using SAML 2.0 (Security Assertion Markup Language 2.0), Lightweight Directory Access Protocol (LDAP), or Microsoft Active Directory. For more information, see the Elastic Load Balancing documentation.

  • For additional layers of defense, you can deploy AWS WAF policies to protect the Application Load Balancer. Having edge policies, application policies, and even private or internal policy enforcement layers adds to the visibility of communication requests and provides unified policy enforcement. For more information, see the blog post Deploying defense in depth using AWS Managed Rules for AWS WAF.

Amazon Inspector

Amazon Inspector implements two types of detective controls, which test the network accessibility of your EC2 instances and the security state of your applications that run on those instances.

The Network Reachability rules package of Amazon Inspector assesses the accessibility of your EC2 instances to or from the internet. These rules help automate the monitoring of your AWS networks and identify where network access to your EC2 instances might be misconfigured. The findings show whether your EC2 instance ports are reachable from the internet. These findings also highlight network configurations that allow for potentially malicious access, such as mismanaged security groups, access control lists (ACLs), internet gateways, and so on. For more information, see the Amazon Inspector documentation.

By installing the Amazon Inspector agent, you can further assess the EC2 host itself for exposure to common vulnerabilities and exposures (CVEs), alignment to Center for Internet Security (CIS) benchmarks, and alignment with AWS security best practices. For more information, see the Amazon Inspector documentation.

Design considerations
  • Amazon Inspector integrates with AWS Security Hub if both services are enabled in the same AWS account. You can use this integration to send all findings from Amazon Inspector to Security Hub, which will then include those findings in its analysis of your security posture.

  • The Amazon Inspector agent initiates all communication with the Amazon Inspector service. This means that the agent must have an outbound network path to public AWS endpoints so that it can send telemetry data. The agent periodically communicates with Amazon Inspector over a TLS-protected channel, which is authenticated in one of two ways: by using the AWS identity associated with the role of the EC2 instance or, if no role is assigned, by using the AWS identity associated with the instance identity document.

AWS Systems Manager

AWS Systems Manager is an AWS service that you can use to view operational data from multiple AWS services and automate operational tasks across your AWS resources. With automated approval workflows and runbooks, you can reduce human error and simplify maintenance and deployment tasks on AWS resources.

In addition to these general automation capabilities, Systems Manager supports a number of preventive, detective, and responsive security features. Systems Manager Agent (SSM Agent) is Amazon software that can be installed and configured on an EC2 instance, an on-premises server, or a virtual machine (VM). SSM Agent makes it possible for Systems Manager to update, manage, and configure these resources. Systems Manager helps you maintain security and compliance by scanning these managed instances and reporting on (or taking corrective action on) any violations that it detects in your patch, configuration, and custom policies.

The AWS SRA uses AWS Systems Manager Session Manager to provide an interactive, browser-based shell and CLI experience. This provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. The AWS SRA uses AWS Systems Manager Patch Manager to apply patches to EC2 instances for both operating systems and applications.

Design considerations
  • Systems Manager relies on EC2 instance metadata to function correctly. Systems Manager can access instance metadata by using either version 1 or version 2 of the Instance Metadata Service (IMDSv1 and IMDSv2).

  • SSM Agent has to communicate with different AWS services and resources such as Amazon EC2 messages, Systems Manager, and Amazon S3. For this communication to happen, the subnet requires either outbound internet connectivity or provisioning of the appropriate VPC endpoints. The AWS SRA uses VPC endpoints.
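As a sketch of the endpoint provisioning described above, the helper below generates the interface endpoint service names that SSM Agent commonly needs in a subnet with no outbound internet path (ssm, ssmmessages, and ec2messages), parameterized by Region. Amazon S3 access is typically handled separately through a gateway endpoint.

```python
def ssm_endpoint_services(region):
    """Interface endpoint service names commonly required for SSM Agent
    when the subnet has no outbound internet connectivity."""
    return [f"com.amazonaws.{region}.{svc}"
            for svc in ("ssm", "ssmmessages", "ec2messages")]
```

You would pass each returned service name when creating an interface VPC endpoint in the application VPC.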

Amazon Aurora

In the AWS SRA, Amazon Aurora and Amazon S3 make up the logical data tier. Aurora is a fully managed relational database engine that's compatible with MySQL and PostgreSQL. An application that is running on the EC2 instances communicates with Aurora and Amazon S3 as needed. Aurora is configured with a database cluster inside a DB subnet group.

Design consideration

As with many database services, security for Aurora is managed at three levels. To control who can perform Amazon Relational Database Service (Amazon RDS) management actions on Aurora DB clusters and DB instances, you use IAM. To control which devices and EC2 instances can open connections to the cluster endpoint and port of the DB instance for Aurora DB clusters in a VPC, you use a VPC security group. To authenticate logins and permissions for an Aurora DB cluster, you can take the same approach as with a stand-alone DB instance of MySQL or PostgreSQL, or you can use IAM database authentication for Aurora MySQL-Compatible Edition. With this latter approach, you authenticate to your Aurora MySQL-Compatible DB cluster by using an IAM role and an authentication token.
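The IAM database authentication flow described above can be sketched as follows. In a real application, the token provider would be the RDS client's generate_db_auth_token call; here it is injected as a callable so that the sketch stays self-contained, and the host, port, and user names are hypothetical.

```python
def aurora_iam_connect_params(host, port, user, token_provider):
    """Assemble connection parameters for IAM database authentication.
    token_provider stands in for the boto3 RDS client's
    generate_db_auth_token; the short-lived token replaces a password,
    and the connection must use TLS."""
    return {
        "host": host,
        "port": port,
        "user": user,
        "password": token_provider(host, port, user),  # ~15-minute token
        "ssl": True,  # IAM auth requires an encrypted connection
    }
```

The resulting dictionary can be fed to a MySQL-compatible driver; no long-lived database password is stored anywhere.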

Amazon S3

Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. It is the data backbone of many applications built on AWS, and appropriate permissions and security controls are critical for protecting sensitive data. For recommended security best practices for Amazon S3, see the documentation, online tech talks, and deeper dives in blog posts. The most important best practice is to block overly permissive access (especially public access) to S3 buckets.
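The block-public-access best practice above maps to four settings, shown here in the shape used by the S3 PutPublicAccessBlock configuration. Enabling all four at the account or bucket level blocks both new and existing public ACLs and policies.

```python
# The four S3 Block Public Access settings; turning all of them on
# implements the "block overly permissive access" best practice.
public_access_block = {
    "BlockPublicAcls": True,       # reject new public ACLs
    "IgnorePublicAcls": True,      # ignore any existing public ACLs
    "BlockPublicPolicy": True,     # reject new public bucket policies
    "RestrictPublicBuckets": True, # restrict access to buckets with public policies
}
```

This configuration fragment would be applied with the S3 API or the console; applying it at the account level covers every bucket in the account.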

AWS KMS

AWS Key Management Service (AWS KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. By defining an encryption approach that includes the storage, rotation, and access control of keys, you can help provide protection for your content against unauthorized users and against unnecessary exposure to authorized users. AWS KMS is a secure and resilient service that uses hardware security modules. AWS KMS keys are the primary resources in AWS KMS. A KMS key is a logical representation of an encryption key. For protection and flexibility, AWS KMS supports three types of KMS keys: customer managed keys, AWS managed keys, and AWS owned keys. Customer managed keys are keys in your AWS account that you create, own, and manage. AWS managed keys are keys in your account that are created, managed, and used on your behalf by an AWS service that is integrated with AWS KMS. AWS owned keys are a collection of keys that an AWS service owns and manages for use in multiple AWS accounts. For more information about using KMS keys, see the AWS KMS documentation and the AWS Key Management Service Best Practices technical paper.

Keep KMS keys in the same accounts and AWS Regions as the data encryption keys that they encrypt. In the Application account, AWS KMS is used to manage keys that are specific to the application, and permissions can be granted to local application roles as well as to appropriate security teams or administrators for some separation of duties. In the Security Tooling account, AWS KMS is used to manage the encryption of centralized security services such as the AWS CloudTrail organization trail that is managed by the AWS organization.
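The separation of duties described above can be expressed in a KMS key policy: one statement grants key administration to a security role and another grants only usage (encrypt/decrypt) to the application role. The account ID and role names below are hypothetical, and the action lists are abbreviated for illustration.

```python
import json

# Hypothetical account ID and role names; administration and usage
# are granted to different principals for separation of duties.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/SecurityKeyAdmin"},
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*",
                       "kms:Put*", "kms:Disable*", "kms:ScheduleKeyDeletion"],
            "Resource": "*",
        },
        {
            "Sid": "KeyUsage",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/ApplicationRole"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}

key_policy_json = json.dumps(key_policy)
```

With this split, the application role can use the key but cannot schedule its deletion or change its policy.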

Design considerations
  • You have a choice for how to deploy AWS KMS. You can allow the workload teams that handle customer data to create, manage, and use KMS keys in the AWS accounts that they manage. This model affords the workload teams more control, flexibility, and agility over the use of encryption keys. Alternatively, you can separate the responsibility for the creation and management of KMS keys into a centralized security account while delegating only the ability to use the keys to the workload teams. This second option facilitates better separation of responsibilities and prevents the workload teams from accidentally deleting or escalating privilege on KMS keys without also involving the security account.

  • You should use appropriate monitoring and detective controls for additional security layers. AWS KMS is integrated with AWS CloudTrail and AWS Config. CloudTrail provides you with logs of all key usage to help meet your regulatory and compliance needs. AWS Config monitors and records all changes in your KMS keys and the associated KMS key policies (IAM resource policies).

AWS CloudHSM

AWS CloudHSM provides managed hardware security modules (HSMs) in the AWS Cloud. It enables you to generate and use your own encryption keys on AWS by using FIPS 140-2 level 3 validated HSMs that you control access to. You can use AWS CloudHSM to offload SSL/TLS processing for your web servers. This reduces the burden on the web server and provides extra security by storing the web server's private key in AWS CloudHSM. You could similarly deploy an HSM from AWS CloudHSM in the inbound VPC in the Network account to store your private keys and sign certificate requests if you need to act as an issuing certificate authority.

Design consideration

If you have a hard requirement for FIPS 140-2 level 3, you can also choose to configure AWS KMS to use the AWS CloudHSM cluster as a custom key store rather than using the native KMS key store. By doing this, you benefit from the integration between AWS KMS and AWS services that encrypt your data, while being responsible for the HSMs that protect your KMS keys. This combines single-tenant HSMs under your control with the ease of use and integration of AWS KMS. To manage your AWS CloudHSM infrastructure, you should employ a public key infrastructure (PKI) or security team that has experience managing HSMs.

ACM Private CA

AWS Certificate Manager Private Certificate Authority (ACM Private CA) provisions and deploys a private TLS certificate that is exported and used on the Application Load Balancer at the front end of the application EC2 instances. This allows encrypted TLS communication to the running applications. With ACM Private CA, you can create your own CA hierarchy and issue certificates with it for authenticating internal users, computers, applications, services, servers, and other devices, and for signing computer code. Certificates issued by a private CA are trusted only within your organization, not on the internet. A PKI or security team should be responsible for managing all PKI infrastructure, including the management and creation of the private CA. However, creation of private certificates from the private CA can be delegated to application development teams if appropriate. For additional uses of ACM, see the Network account section earlier in this document.

AWS Secrets Manager

AWS Secrets Manager helps you protect the credentials (secrets) that you need to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can replace hardcoded credentials in your code with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure that the secret can't be compromised by someone who is examining your code, because the secret no longer exists in the code. With Secrets Manager, you can manage access to secrets by using fine-grained IAM policies and resource-based policies. You can help secure secrets by encrypting them with encryption keys that you manage by using AWS KMS. Secrets Manager also integrates with AWS logging and monitoring services for centralized auditing.

In the AWS SRA, Secrets Manager is located in the Application account to support local application use cases and to manage secrets close to their usage. In this example, an instance profile is attached to the EC2 instances in the Application account. Separate secrets can then be configured in Secrets Manager to allow that instance profile to retrieve secrets—for example, to join the appropriate Active Directory or LDAP domain and to access the Aurora database.
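The retrieval pattern described above, replacing hardcoded credentials with an API call, can be sketched as follows. The client parameter stands in for a boto3 Secrets Manager client (which exposes get_secret_value); the secret name and fields in the usage example are hypothetical.

```python
import json

def get_secret(client, secret_id, _cache={}):
    """Fetch a secret at runtime instead of hardcoding credentials.
    `client` is expected to expose get_secret_value, as the boto3
    Secrets Manager client does. The module-level cache (an intentional
    mutable default) avoids one API call per request."""
    if secret_id not in _cache:
        resp = client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(resp["SecretString"])
    return _cache[secret_id]
```

Because the EC2 instance profile authorizes the call, no credential ever appears in source code or configuration files; rotation in Secrets Manager takes effect on the next cache miss.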

Design considerations
  • In general, configure and manage Secrets Manager in the account that is closest to where the secrets will be used. This approach takes advantage of the local knowledge of the use case and provides speed and flexibility to application development teams. For tightly controlled information, where an additional layer of control may be appropriate, secrets can be centrally managed by Secrets Manager in the Security Tooling account.

  • AWS Config can provide detective controls for these secrets. For example, it can track and monitor changes to secrets in Secrets Manager, such as the secret's description, rotation configuration, tags, and the secret's relationship to other AWS resources, such as the KMS key used for encryption or the AWS Lambda functions used for secret rotation. You can also configure Amazon EventBridge, which receives all configuration and compliance change notifications from AWS Config, to route particular secrets events for notification or remediation actions.