Build an AWS landing zone that includes MongoDB Atlas - AWS Prescriptive Guidance

Build an AWS landing zone that includes MongoDB Atlas

Igor Alekseev, Amazon Web Services

Summary

This pattern describes how to build an AWS landing zone that’s integrated with a MongoDB Atlas cluster. The infrastructure is automatically deployed by using a Terraform script.

A well-structured, multi-account AWS environment, which is called a landing zone, offers scalability and security, particularly for enterprises. It serves as a foundation for rapid deployment of workloads and applications, and helps ensure confidence in security and infrastructure. Building a landing zone requires careful consideration of technical and business factors, including account structure, networking, security, and access management. These considerations should be aligned with your organization's future growth and business objectives.

The use cases for this pattern include the following:

  • Enterprise SaaS and PaaS platforms: Multitenant software as a service (SaaS) and platform as a service (PaaS) offerings that run on AWS can use this setup to help provide secure, private access to MongoDB Atlas without exposing data over the public internet.

  • Highly regulated industries: Banking, financial services, healthcare, and government workloads that require strict compliance with standards such as Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry Data Security Standard (PCI DSS), System and Organization Controls 2 (SOC2), and General Data Protection Regulation (GDPR) benefit from:

    • Encrypted, private connectivity through AWS PrivateLink

    • Multi-AZ high availability of MongoDB replica sets

  • Secure AI/ML workloads: Training or inference pipelines in Amazon Bedrock, Amazon SageMaker AI, or custom AI models can securely fetch and store data in MongoDB Atlas over PrivateLink.

  • Disaster recovery and business continuity: Multi-AZ design ensures that no single Availability Zone failure disrupts workloads. An Atlas replica set across Availability Zones ensures automatic failover. This is critical for always-on services such as financial technology (fintech) apps, digital banking, or healthcare monitoring.

Prerequisites and limitations

Prerequisites

  • Organization owner access to MongoDB Atlas so you can create Atlas API keys. For information about this requirement, see Manage Organization Access in the MongoDB documentation.

  • An active AWS account.

  • Terraform, installed and configured.

  • A MongoDB Atlas cluster, created with MongoDB version 6.0 or later.

  • Familiarity with MongoDB and MongoDB Atlas. For more information, see the MongoDB Atlas documentation.

Limitations

Architecture

The following reference architecture diagram illustrates the deployment of an AWS landing zone that's integrated with a MongoDB Atlas private endpoint. The design combines AWS best practices such as Multi-AZ deployment, least-privilege security controls, and private connectivity to give organizations a secure, scalable, and highly available environment for modern applications.

Multi-AZ architecture for AWS landing zone that's integrated with MongoDB Atlas.

This architecture consists of the following:

VPC

  • A single virtual private cloud (VPC) spans three Availability Zones.

  • The VPC is subdivided into subnets that are aligned to each Availability Zone. These subnets distribute workloads for high availability.
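The VPC layout described above can be sketched in Terraform. This is an illustrative fragment only; the CIDR ranges, Region, and resource names are placeholders, not the values that the pattern's modules actually use.

```hcl
# Illustrative sketch -- CIDRs, Region, and names are placeholders,
# not the values defined by the pattern's Terraform modules.
provider "aws" {
  region = "ap-southeast-1"
}

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "landing_zone" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# One private subnet per Availability Zone for high availability.
resource "aws_subnet" "private" {
  count             = 3
  vpc_id            = aws_vpc.landing_zone.id
  cidr_block        = cidrsubnet(aws_vpc.landing_zone.cidr_block, 8, count.index)
  availability_zone = data.aws_availability_zones.available.names[count.index]
}
```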

Internet access

  • An internet gateway provides outbound internet connectivity for resources that need it, such as application or bastion hosts.

  • Public subnets can house NAT gateways, which allow private subnet workloads to download updates, patches, and other required packages without exposing them directly to the public internet.

Private subnets and route tables

  • Application components, microservices, or other sensitive resources typically reside in private subnets.

  • Dedicated route tables control traffic flows. Routes direct outbound traffic from private subnets to NAT gateways for secure, egress-only internet access.

  • Inbound requests from the internet flow through elastic load balancers or bastion hosts (if used) in public subnets, and then route appropriately to private subnet resources.
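As a sketch, the egress-only routing described above might look like the following Terraform fragment. It assumes the VPC, private subnets, and NAT gateway are defined elsewhere, and the resource names are placeholders.

```hcl
# Illustrative sketch of an egress-only route for a private subnet.
# aws_vpc.landing_zone, aws_subnet.private, and aws_nat_gateway.this
# are assumed to be defined elsewhere in the configuration.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.landing_zone.id
}

# Send all non-local outbound traffic through the NAT gateway.
resource "aws_route" "private_egress" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.this.id
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private[0].id
  route_table_id = aws_route_table.private.id
}
```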

MongoDB Atlas connectivity through PrivateLink

  • The architecture uses PrivateLink (through a VPC endpoint) to securely connect to MongoDB Atlas without exposing your data to the public internet.

  • Requests remain on the AWS backbone network and are never routed over the public internet. Data in transit is also protected by the TLS encryption that Atlas requires for all connections.

  • The MongoDB Atlas dedicated VPC hosts your primary and secondary nodes, and provides a secure, isolated environment for your managed database cluster.
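The PrivateLink wiring can be sketched with the MongoDB Atlas Terraform provider. This is an illustrative fragment; the project ID variable, the subnet and security group references, and the Region are placeholders.

```hcl
# Illustrative sketch of PrivateLink connectivity to Atlas. The VPC,
# subnet, and security group resources are assumed to exist elsewhere.
resource "mongodbatlas_privatelink_endpoint" "this" {
  project_id    = var.atlas_project_id
  provider_name = "AWS"
  region        = "ap-southeast-1"
}

# Interface VPC endpoint in your VPC that targets the Atlas endpoint service.
resource "aws_vpc_endpoint" "atlas" {
  vpc_id             = aws_vpc.landing_zone.id
  service_name       = mongodbatlas_privatelink_endpoint.this.endpoint_service_name
  vpc_endpoint_type  = "Interface"
  subnet_ids         = aws_subnet.private[*].id
  security_group_ids = [aws_security_group.atlas.id]
}

# Register the interface endpoint with Atlas to complete the connection.
resource "mongodbatlas_privatelink_endpoint_service" "this" {
  project_id          = mongodbatlas_privatelink_endpoint.this.project_id
  private_link_id     = mongodbatlas_privatelink_endpoint.this.private_link_id
  endpoint_service_id = aws_vpc_endpoint.atlas.id
  provider_name       = "AWS"
}
```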

Multi-AZ deployment

  • Critical infrastructure components (such as NAT gateways and application subnets) are distributed across at least three Availability Zones. If an Availability Zone experiences an outage, this architecture ensures that workloads in the remaining Availability Zones remain operational.

  • MongoDB Atlas provides high availability by default through replica sets, which keeps your database layer fault tolerant even if an Availability Zone becomes unavailable.

Tools

AWS services

  • AWS Secrets Manager helps you replace hardcoded credentials in your code, including passwords, with an API call to retrieve the secret programmatically.

Other products and tools

  • MongoDB Atlas is a fully managed database as a service (DBaaS) for deploying and managing MongoDB databases in the cloud.

  • Terraform is an infrastructure as code (IaC) tool from HashiCorp that helps you create and manage cloud and on-premises resources. In this pattern, you use Terraform to run a script to facilitate the deployment of required resources on AWS and MongoDB Atlas.

Code repository

The code for this pattern is available in the AWS and MongoDB Atlas Landing Zone GitHub repository.

Epics

Task | Description | Skills required

Identify key stakeholders.

Identify all key stakeholders and team members who are involved in your landing zone project. This could include roles such as:

  • Database administrators (DBAs)

  • DevOps engineers

  • Application developers

  • Application architects

Migration lead

Create a structural blueprint.

Create a blueprint that outlines the desired structure of your AWS landing zone and its MongoDB Atlas integration.

Migration lead

Create an architecture plan.

Work with your application architects to analyze requirements and design a fault-tolerant, resilient architecture. This pattern provides a starter architecture template for your reference. You can customize this template to meet your organization's security and infrastructure needs.

Cloud architect

Plan for setup and deployment.

Work with all stakeholders to determine how the architecture will be deployed and how security measures will be implemented, and to confirm that the plan aligns with both the organization's and the requesting team's interests.

Migration lead, DevOps engineer, DBA
Task | Description | Skills required

Clone the repository.

Clone the code from the GitHub repository by running the command:

git clone https://github.com/mongodb-partners/AWS-MongoDB-Atlas-Landing-Zone
App developer, DevOps engineer

Get your Atlas organization ID.

  1. If you don't have a MongoDB Atlas account, sign up for one.

  2. Follow the steps in the MongoDB documentation to create an organization.

  3. Copy the organization ID.

DBA

Generate Atlas organization-level API keys.

To generate your organization-level API keys in Atlas, follow the instructions in the MongoDB documentation.

DBA

Create a secret in AWS Secrets Manager.

Store the MongoDB Atlas API keys generated in the previous step as a key-value secret in Secrets Manager. For instructions, see the Secrets Manager documentation.

DevOps engineer
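If you prefer to manage this secret with Terraform instead of the console, a minimal sketch might look like the following. The secret name and the public_key/private_key field names are assumptions; use whatever names the repository's modules/mongodb-atlas/main.tf expects.

```hcl
# Illustrative sketch only -- the secret name and key names are
# placeholders that must match what the pattern's modules expect.
resource "aws_secretsmanager_secret" "atlas_api_keys" {
  name = "mongodb-atlas-api-keys"
}

resource "aws_secretsmanager_secret_version" "atlas_api_keys" {
  secret_id = aws_secretsmanager_secret.atlas_api_keys.id
  secret_string = jsonencode({
    public_key  = var.atlas_public_key
    private_key = var.atlas_private_key
  })
}
```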

Select the Atlas cluster tier.

To select the correct Atlas cluster tier, follow the instructions in the MongoDB documentation.

DBA
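As a rough illustration of where the tier choice surfaces in Terraform, a cluster resource specifies the tier through its instance size name. The values below (M10, the Region, and the names) are placeholders for the tier you selected.

```hcl
# Illustrative sketch -- project ID, names, Region, and tier (M10)
# are placeholders; choose the tier that fits your workload.
resource "mongodbatlas_cluster" "this" {
  project_id                  = var.atlas_project_id
  name                        = "landing-zone-cluster"
  provider_name               = "AWS"
  provider_region_name        = "AP_SOUTHEAST_1"
  provider_instance_size_name = "M10"
  mongo_db_major_version      = "6.0"
}
```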
Task | Description | Skills required

Modify the Terraform script.

In your local copy of the GitHub repository, update the secret name in the modules/mongodb-atlas/main.tf file (line 12), so Terraform can retrieve the credentials from Secrets Manager during deployment.

DevOps engineer
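As a minimal sketch, a Terraform configuration can retrieve those credentials from Secrets Manager and pass them to the Atlas provider. The secret name and the public_key/private_key field names below are assumptions; match them to the secret you created earlier.

```hcl
# Illustrative sketch -- the secret name and JSON field names are
# placeholders that must match your Secrets Manager secret.
data "aws_secretsmanager_secret_version" "atlas_api_keys" {
  secret_id = "mongodb-atlas-api-keys"
}

locals {
  atlas_keys = jsondecode(data.aws_secretsmanager_secret_version.atlas_api_keys.secret_string)
}

provider "mongodbatlas" {
  public_key  = local.atlas_keys.public_key
  private_key = local.atlas_keys.private_key
}
```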

Create an AWS access key ID and secret key.

To create your AWS access key ID and secret key, follow the instructions in the AWS re:Post article How do I create an AWS access key?

Although it is a best practice to assign policies with the least privilege necessary, select the AdministratorAccess policy for this pattern so that Terraform can create all of the required resources.

After you create your access key, review Security best practices in IAM to learn about best practices for managing access keys.

DevOps engineer

Allocate Elastic IP addresses.

Allocate at least two Elastic IP address IDs. For instructions, see the Amazon Virtual Private Cloud (Amazon VPC) documentation.

DevOps engineer
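If you want to allocate the Elastic IP addresses with Terraform rather than the console, a sketch might look like the following; the count of two and the output name are illustrative.

```hcl
# Illustrative sketch: allocate Elastic IPs for the NAT gateways.
resource "aws_eip" "nat" {
  count  = 2
  domain = "vpc"
}

# Expose the allocation IDs so they can be passed to other modules.
output "nat_eip_allocation_ids" {
  value = aws_eip.nat[*].allocation_id
}
```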

Create an S3 bucket.

Create an S3 bucket to store the state of your Terraform deployment by following the instructions in the Amazon Simple Storage Service (Amazon S3) documentation.

DevOps engineer

Update the S3 bucket for storage.

Update the S3 bucket information in your local version of environments/development/main.tf to match the name and Region of the bucket you created in the previous step, and specify a key prefix. For example:

terraform {
  ...
  backend "s3" {
    bucket = "startup-name-product-terraform"
    key    = "network/dev"
    region = "ap-southeast-1"
  }
}

For this example, you can configure Terraform to use the key prefix network/dev to organize the Terraform state file. You can change the value to prod or staging to match the environment you want to create. For information about using multiple environments, see the last step in this section.

For more information about Amazon S3 key prefixes, see Organizing objects using prefixes in the Amazon S3 documentation.

DevOps engineer

Set Terraform variables.

The sample landing zone defines input variable values by using Terraform variable definition files.

The variable file is located at environments/development/variables.tf. You can set the variable values in the environments/development/terraform.tfvars file. Configure these variables as described in the Readme file for the GitHub repository.

DevOps engineer
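As an illustration only, a terraform.tfvars file might look like the following. The variable names here are hypothetical; use the names that are actually declared in environments/development/variables.tf and described in the Readme.

```hcl
# Hypothetical terraform.tfvars -- variable names and values are
# examples, not the names defined by the repository.
aws_region         = "ap-southeast-1"
vpc_cidr           = "10.0.0.0/16"
atlas_org_id       = "<your-atlas-organization-id>"
atlas_project_name = "landing-zone-dev"
cluster_tier       = "M10"
```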

Set up environment variables.

If you are planning to run the Terraform script on your local machine, set up the following environment variables:

  • AWS_ACCESS_KEY_ID: AWS access key ID

  • AWS_SECRET_ACCESS_KEY: AWS secret access key

  • AWS_DEFAULT_REGION: AWS Region

  • TF_LOG: Terraform log level (for example, DEBUG or INFO)

For more information about setting up environment variables, see the AWS Command Line Interface (AWS CLI) documentation.

DevOps engineer
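For example, on Linux or macOS you might export the variables as follows. The values are placeholders; substitute your own credentials and Region.

```shell
# Placeholder values -- replace with your own credentials before
# running Terraform locally.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="example-secret-access-key"
export AWS_DEFAULT_REGION="ap-southeast-1"
export TF_LOG="INFO"
```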

Check VPC configurations.

To follow the best practices recommended by AWS, configure the settings for VPC and subnet CIDRs, NAT gateways, routes, and route tables in the Terraform script to meet your organization’s needs. For specifics, see the Readme file for the GitHub repository.

DevOps engineer

Tag resources.

You can tag your AWS resources to monitor them when they’re deployed by the Terraform script. For examples, see the Readme file for the GitHub repository. For information about monitoring resources through tags for cost, usage, and so on, see Activating user-defined cost allocation tags in the AWS Billing documentation.

DevOps engineer
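One common approach, shown here as a sketch, is the AWS provider's default_tags block, which applies a tag set to every resource that the provider creates. The tag keys and values are examples only.

```hcl
# Illustrative sketch -- tag keys and values are examples; align them
# with your organization's tagging standard.
provider "aws" {
  region = "ap-southeast-1"

  default_tags {
    tags = {
      Environment = "development"
      Project     = "mongodb-atlas-landing-zone"
      ManagedBy   = "terraform"
    }
  }
}
```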

Use multiple environments.

The GitHub repository provides a development environment folder. You can also add your own environments in the environment folder.

To add an environment, copy the development folder to a new folder (for example, prod or staging) under environments. You can then update the terraform.tfvars file with the new value.

DevOps engineer
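The copy step can be sketched as shell commands. The mkdir and echo lines below only simulate the cloned repository layout, and the variable name is illustrative.

```shell
# The next two lines stand in for the cloned repository layout;
# in practice the development folder already exists.
mkdir -p environments/development
echo 'environment_name = "development"' > environments/development/terraform.tfvars

# Copy the development folder to create a new staging environment.
cp -r environments/development environments/staging

# Update the copied variable values for the new environment.
sed -i 's/development/staging/' environments/staging/terraform.tfvars
```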
Task | Description | Skills required

Initialize the Terraform working directory.

To initialize the working directory and download the necessary packages, run the command:

terraform init
DevOps engineer

Create an execution plan.

To create an execution plan and visualize the changes that Terraform will make to your infrastructure, run the command:

terraform plan
DevOps engineer

Deploy the changes.

To implement the changes to your infrastructure as described in the code, run the command:

terraform apply
DevOps engineer

Validate the deployment.

Validate the components that Terraform created or modified in your infrastructure.

To test the setup, provision a compute resource (for example, an Amazon EC2 instance or AWS Lambda function) in or attached to the VPC.

DevOps engineer, App developer
Task | Description | Skills required

Clean up.

When you have finished testing, run the following command to destroy the resources that Terraform deployed in your infrastructure:

terraform destroy
DevOps engineer

Related resources

  • Discovery and assessment

  • Setting up MongoDB Atlas and AWS environments

  • Deploying the landing zone