Performing a proof of concept with Amazon Aurora
Following, you can find an explanation of how to set up and run a proof of concept for Aurora. A proof of concept is an investigation that you do to see if Aurora is a good fit with your application. The proof of concept can help you understand Aurora features in the context of your own database applications and how Aurora compares with your current database environment. It can also show what level of effort you need to move data, port SQL code, tune performance, and adapt your current management procedures.
In this topic, you can find an overview and a step-by-step outline of the high-level procedures and decisions involved in running a proof of concept. For detailed instructions, you can follow links to the full documentation for specific subjects.
Overview of an Aurora proof of concept
When you conduct a proof of concept for Amazon Aurora, you learn what it takes to port your existing data and SQL applications to Aurora. You exercise the important aspects of Aurora at scale, using a volume of data and activity that's representative of your production environment. The objective is to feel confident that the strengths of Aurora match up well with the challenges that cause you to outgrow your previous database infrastructure. At the end of a proof of concept, you have a solid plan to do larger-scale performance benchmarking and application testing. At this point, you understand the biggest work items on your way to a production deployment.
The following advice about best practices can help you avoid common mistakes that cause problems during benchmarking. However, this topic doesn't cover the step-by-step process of performing benchmarks and doing performance tuning. Those procedures vary depending on your workload and the Aurora features that you use. For detailed information, consult performance-related documentation such as Managing performance and scaling for Aurora DB clusters, Amazon Aurora MySQL performance enhancements, Performance and scaling for Amazon Aurora PostgreSQL, and Monitoring DB load with Performance Insights on Amazon Aurora.
The information in this topic applies mainly to applications where your organization writes the code and designs the schema and that support the MySQL and PostgreSQL open-source database engines. If you're testing a commercial application or code generated by an application framework, you might not have the flexibility to apply all of the guidelines. In such cases, check with your AWS representative to see if there are Aurora best practices or case studies for your type of application.
1. Identify your objectives
When you evaluate Aurora as part of a proof of concept, you choose what measurements to make and how to evaluate the success of the exercise.
You must ensure that all of the functionality of your application is compatible with Aurora. Because Aurora major versions are wire-compatible with corresponding major versions of MySQL and PostgreSQL, most applications developed for those engines are also compatible with Aurora. However, you must still validate compatibility on a per-application basis.
For example, some of the configuration choices that you make when you set up an Aurora cluster influence whether you can or should use particular database features. You might start with the most general-purpose kind of Aurora cluster, known as provisioned. You might then decide if a specialized configuration such as serverless or parallel query offers benefits for your workload.
Use the following questions to help identify and quantify your objectives:
- Does Aurora support all functional use cases of your workload?
- What dataset size or load level do you want? Can you scale to that level?
- What are your specific query throughput or latency requirements? Can you reach them?
- What is the maximum acceptable amount of planned or unplanned downtime for your workload? Can you achieve it?
- What are the necessary metrics for operational efficiency? Can you accurately monitor them?
- Does Aurora support your specific business goals, such as cost reduction or an increase in deployment or provisioning speed? Do you have a way to quantify these goals?
- Can you meet all security and compliance requirements for your workload?
Take some time to build knowledge about Aurora database engines and platform capabilities, and review the service documentation. Take note of all the features that can help you achieve your desired outcomes. One of these might be workload consolidation, described in the AWS Database Blog post How to plan and optimize Amazon Aurora with MySQL compatibility for consolidated workloads.
2. Understand your workload characteristics
Evaluate Aurora in the context of your intended use case. Aurora is a good choice for online transaction processing (OLTP) workloads. You can also run reports on the cluster that holds the real-time OLTP data without provisioning a separate data warehouse cluster. You can recognize if your use case falls into these categories by checking for the following characteristics:
- High concurrency, with dozens, hundreds, or thousands of simultaneous clients.
- Large volume of low-latency queries (milliseconds to seconds).
- Short, real-time transactions.
- Highly selective query patterns, with index-based lookups.
- For HTAP, analytical queries that can take advantage of Aurora parallel query.
One of the key factors affecting your database choices is the velocity of the data. High velocity involves data being inserted and updated very frequently. Such a system might have thousands of connections and hundreds of thousands of simultaneous queries reading from and writing to a database. Queries in high-velocity systems usually affect a relatively small number of rows, and typically access multiple columns in the same row.
Aurora is designed to handle high-velocity data. Depending on the workload, an Aurora cluster with a single r4.16xlarge DB instance can process more than 600,000 SELECT statements per second. Again depending on workload, such a cluster can process 200,000 INSERT, UPDATE, and DELETE statements per second. Aurora is a row store database and is ideally suited for high-volume, high-throughput, and highly parallelized OLTP workloads.
Aurora can also run reporting queries on the same cluster that handles the OLTP workload. Aurora supports up to 15 replicas, each of which is on average within 10–20 milliseconds of the primary instance. Analysts can query OLTP data in real time without copying the data to a separate data warehouse cluster. With Aurora clusters using the parallel query feature, you can offload much of the processing, filtering, and aggregation work to the massively distributed Aurora storage subsystem.
Use this planning phase to familiarize yourself with the capabilities of Aurora, other AWS services, the AWS Management Console, and the AWS CLI. Also, check how these work with the other tooling that you plan to use in the proof of concept.
3. Practice with the AWS Management Console or AWS CLI
As a next step, practice with the AWS Management Console or the AWS CLI, to become familiar with these tools and with Aurora.
Practice with the AWS Management Console
The following initial activities with Aurora database clusters are mainly so you can familiarize yourself with the AWS Management Console environment and practice setting up and modifying Aurora clusters. If you use the MySQL-compatible and PostgreSQL-compatible database engines with Amazon RDS, you can build on that knowledge when you use Aurora.
By taking advantage of the Aurora shared storage model and features such as replication and snapshots, you can treat entire database clusters as another kind of object that you freely manipulate. You can set up, tear down, and change the capacity of Aurora clusters frequently during the proof of concept. You aren't locked into early choices about capacity, database settings, and physical data layout.
To get started, set up an empty Aurora cluster. Choose the provisioned capacity type and regional location for your initial experiments.
Connect to that cluster using a client program such as a SQL command-line application. Initially, you connect using the cluster endpoint. You connect to that endpoint to perform any write operations, such as data definition language (DDL) statements and extract, transform, load (ETL) processes. Later in the proof of concept, you connect query-intensive sessions using the reader endpoint, which distributes the query workload among multiple DB instances in the cluster.
Scale the cluster out by adding more Aurora Replicas. For those procedures, see Replication with Amazon Aurora. Scale the DB instances up or down by changing the AWS instance class. Understand how Aurora simplifies these kinds of operations, so that if your initial estimates for system capacity are inaccurate, you can adjust later without starting over.
Create a snapshot and restore it to a different cluster.
Examine cluster metrics to see activity over time, and how the metrics apply to the DB instances in the cluster.
It's useful to become familiar with how to do these things through the AWS Management Console in the beginning. After you understand what you can do with Aurora, you can progress to automating those operations using the AWS CLI. In the following sections, you can find more details about the procedures and best practices for these activities during the proof-of-concept period.
Practice with the AWS CLI
We recommend automating deployment and management procedures, even in a proof-of-concept setting. To do so, become familiar with the AWS CLI if you're not already. If you use the MySQL-compatible and PostgreSQL-compatible database engines with Amazon RDS, you can build on that knowledge when you use Aurora.
Aurora typically involves groups of DB instances arranged in clusters. Thus, many operations involve determining which DB instances are associated with a cluster and then performing administrative operations in a loop for all the instances.
For example, you might automate steps such as creating Aurora clusters, then scaling them up with larger instance classes or scaling them out with additional DB instances. Doing so helps you to repeat any stages in your proof of concept and explore what-if scenarios with different kinds or configurations of Aurora clusters.
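For example, the following minimal sketch finds the DB instances that belong to one cluster and performs an operation on each in a loop. It assumes the AWS SDK for Python (boto3) with configured credentials; the cluster identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Look up the cluster and its member DB instances.
cluster = rds.describe_db_clusters(DBClusterIdentifier="my-poc-cluster")["DBClusters"][0]

# Perform an administrative operation in a loop for all the instances.
for member in cluster["DBClusterMembers"]:
    instance_id = member["DBInstanceIdentifier"]
    instance = rds.describe_db_instances(DBInstanceIdentifier=instance_id)["DBInstances"][0]
    role = "writer" if member["IsClusterWriter"] else "reader"
    print(instance_id, role, instance["DBInstanceClass"], instance["DBInstanceStatus"])
```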
Learn the capabilities and limitations of infrastructure deployment tools such as AWS CloudFormation. You might find activities that you do in a proof-of-concept context aren't suitable for production use. For example, the AWS CloudFormation behavior for modification is to create a new instance and delete the current one, including its data. For more details on this behavior, see Update behaviors of stack resources in the AWS CloudFormation User Guide.
4. Create your Aurora cluster
With Aurora, you can explore what-if scenarios by adding DB instances to the cluster and scaling up the DB instances to more powerful instance classes. You can also create clusters with different configuration settings to run the same workload side by side. With Aurora, you have a lot of flexibility to set up, tear down, and reconfigure DB clusters. Given this, it's helpful to practice these techniques in the early stages of the proof-of-concept process. For the general procedures to create Aurora clusters, see Creating an Amazon Aurora DB cluster.
Where practical, start with a cluster using the following settings. Skip this step only if you have certain specific use cases in mind. For example, you might skip this step if your use case requires a specialized kind of Aurora cluster. Or you might skip it if you need a particular combination of database engine and version. A scripted sketch that applies several of these settings follows the list.
- Turn off Easy create. For the proof of concept, we recommend that you be aware of all the settings you choose so that you can create identical or slightly different clusters later.
- Use a recent DB engine version. These combinations of database engine and version have wide compatibility with other Aurora features and substantial customer usage for production applications.
  - Aurora MySQL version 3.x (MySQL 8.0 compatibility)
  - Aurora PostgreSQL version 15.x or 16.x
- Choose the Dev/Test template. This choice isn't significant for your proof-of-concept activities.
- For DB instance class, choose Memory optimized classes and one of the xlarge instance classes. You can adjust the instance class up or down later.
- Under Multi-AZ Deployment, choose Create an Aurora Replica or Reader node in a different AZ. Many of the most useful aspects of Aurora involve clusters of multiple DB instances. It makes sense to always start with at least two DB instances in any new cluster. Using a different Availability Zone for the second DB instance helps to test different high availability scenarios.
- When you pick names for the DB instances, use a generic naming convention. Don't refer to any cluster DB instance as the "writer," because different DB instances assume those roles as needed. We recommend using something like clustername-az-serialnumber, for example myprodappdb-a-01. These pieces uniquely identify the DB instance and its placement.
- Set the backup retention high for the Aurora cluster. With a long retention period, you can do point-in-time recovery (PITR) for a period up to 35 days. You can reset your database to a known state after running tests involving DDL and data manipulation language (DML) statements. You can also recover if you delete or change data by mistake.
- Turn on additional recovery, logging, and monitoring features at cluster creation. Turn on all the choices that are available under Backtrack, Performance Insights, Monitoring, and Log exports. With these features enabled, you can test the suitability of features such as backtracking, Enhanced Monitoring, or Performance Insights for your workload. You can also easily investigate performance and perform troubleshooting during the proof of concept.
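The following sketch shows roughly how several of these settings map to API calls, assuming boto3 and an Aurora MySQL cluster. The identifiers, password, instance class, and Availability Zones are placeholders to adapt; Backtrack applies to Aurora MySQL only.

```python
import boto3

rds = boto3.client("rds")

# Create the cluster with a long backup retention and backtracking enabled.
rds.create_db_cluster(
    DBClusterIdentifier="myprodappdb",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="choose-a-strong-password",  # placeholder
    BackupRetentionPeriod=35,
    BacktrackWindow=86400,  # 24 hours, in seconds; Aurora MySQL only
)

# Add two DB instances in different Availability Zones.
for name, az in [("myprodappdb-a-01", "us-east-1a"), ("myprodappdb-b-01", "us-east-1b")]:
    rds.create_db_instance(
        DBInstanceIdentifier=name,
        DBClusterIdentifier="myprodappdb",
        Engine="aurora-mysql",
        DBInstanceClass="db.r6g.xlarge",  # memory optimized, xlarge
        AvailabilityZone=az,
        EnablePerformanceInsights=True,
    )
```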
5. Set up your schema
On the Aurora cluster, set up databases, tables, indexes, foreign keys, and other schema objects for your application. If you're moving from another MySQL-compatible or PostgreSQL-compatible database system, expect this stage to be simple and straightforward. You use the same SQL syntax and command line or other client applications that you're familiar with for your database engine.
To issue SQL statements on your cluster, find its cluster endpoint and supply that value as the connection parameter to your client application. You can find the cluster endpoint on the Connectivity tab of the detail page of your cluster. The cluster endpoint is the one labeled Writer. The other endpoint, labeled Reader, represents a read-only connection that you can supply to end users who run reports or other read-only queries. For help with any issues around connecting to your cluster, see Connecting to an Amazon Aurora DB cluster.
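If you're scripting the proof of concept, you can also look up both endpoints programmatically. The following is a minimal sketch assuming boto3; the cluster identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")
cluster = rds.describe_db_clusters(DBClusterIdentifier="myprodappdb")["DBClusters"][0]

print("Writer (cluster) endpoint:", cluster["Endpoint"])  # DDL, DML, ETL
print("Reader endpoint:", cluster["ReaderEndpoint"])      # read-only queries
```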
If you're porting your schema and data from a different database system, expect to make some schema changes at this point. These schema changes are to match the SQL syntax and capabilities available in Aurora. You might leave out certain columns, constraints, triggers, or other schema objects at this point. Doing so can be useful particularly if these objects require rework for Aurora compatibility and aren't significant for your objectives with the proof of concept.
If you're migrating from a database system with a different underlying engine than Aurora's, consider using the AWS Schema Conversion Tool (AWS SCT) to simplify the process. For details, see the AWS Schema Conversion Tool User Guide.
For general details about migration and porting activities, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.
During this stage, you can evaluate whether there are inefficiencies in your schema setup, for example in your indexing strategy or other table structures such as partitioned tables. Such inefficiencies can be amplified when you deploy your application on a cluster with multiple DB instances and a heavy workload. Consider whether you can fine-tune such performance aspects now, or during later activities such as a full benchmark test.
6. Import your data
During the proof of concept, you bring across the data, or a representative sample, from your former database system. If practical, set up at least some data in each of your tables. Doing so helps to test compatibility of all data types and schema features. After you have exercised the basic Aurora features, scale up the amount of data. By the time you finish the proof of concept, you should test your ETL tools, queries, and overall workload with a dataset that's big enough to draw accurate conclusions.
You can use several techniques to import either physical or logical backup data to Aurora. For details, see Migrating data to an Amazon Aurora MySQL DB cluster or Migrating data to Amazon Aurora with PostgreSQL compatibility depending on the database engine you're using in the proof of concept.
Experiment with the ETL tools and technologies that you're considering. See which one best meets your needs. Consider both throughput and flexibility. For example, some ETL tools perform a one-time transfer, and others involve ongoing replication from the old system to Aurora.
If you're migrating from a MySQL-compatible system to Aurora MySQL, you can use the native data transfer tools. The same applies if you're migrating from a PostgreSQL-compatible system to Aurora PostgreSQL. If you're migrating from a database system that uses a different underlying engine than Aurora does, you can experiment with the AWS Database Migration Service (AWS DMS). For details about AWS DMS, see the AWS Database Migration Service User Guide.
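As one illustration of a native transfer path, Aurora MySQL can load text files directly from Amazon S3 with the LOAD DATA FROM S3 statement, provided the cluster is configured with an IAM role that allows access to the bucket. The following hedged sketch issues that statement through the PyMySQL driver; the endpoint, credentials, database, bucket, and table names are placeholders.

```python
import pymysql

# Connect through the cluster (writer) endpoint for the load.
conn = pymysql.connect(
    host="myprodappdb.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
    user="admin",
    password="choose-a-strong-password",  # placeholder
    database="pocdb",
)
with conn.cursor() as cur:
    # Requires an IAM role attached to the cluster that can read the bucket.
    cur.execute(r"""
        LOAD DATA FROM S3 's3://my-poc-bucket/orders.csv'
        INTO TABLE orders
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'
    """)
conn.commit()
conn.close()
```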
For details about migration and porting activities, see the AWS whitepaper Aurora migration handbook.
7. Port your SQL code
Porting SQL code and the associated applications requires varying levels of effort, depending on your situation. In particular, the level of effort depends on whether you're moving from a MySQL-compatible or PostgreSQL-compatible system or from another kind of database.
- If you're moving from RDS for MySQL or RDS for PostgreSQL, the SQL changes are small enough that you can try the original SQL code with Aurora and manually incorporate needed changes.
- Similarly, if you move from an on-premises database compatible with MySQL or PostgreSQL, you can try the original SQL code and manually incorporate changes.
- If you're coming from a different commercial database, the required SQL changes are more extensive. In this case, consider using the AWS SCT.
You can verify the database connection logic in your application. To take advantage of Aurora distributed processing, you might need to use separate connections for read and write operations, and use relatively short sessions for query operations. For information about connections, see 9. Connect to Aurora.
Consider if you had to make compromises and tradeoffs to work around issues in your production database. Build time into the proof-of-concept schedule to make improvements to your schema design and queries. To judge if you can achieve easy wins in performance, operating cost, and scalability, try the original and modified applications side by side on different Aurora clusters.
For details about migration and porting activities, see the AWS whitepaper Aurora migration handbook.
8. Specify configuration settings
You can also review your database configuration parameters as part of the Aurora proof-of-concept exercise. You might already have MySQL or PostgreSQL configuration settings tuned for performance and scalability in your current environment. The Aurora storage subsystem is adapted and tuned for a distributed, cloud-based environment with high-speed storage. As a result, many former database engine settings don't apply. We recommend conducting your initial experiments with the default Aurora configuration settings. Reapply settings from your current environment only if you encounter performance and scalability bottlenecks. If you're interested, you can look more deeply into this subject in the AWS Database Blog post Introducing the Aurora storage engine.
Aurora makes it easy to reuse the optimal configuration settings for a particular application or use case. Instead of editing a separate configuration file for each DB instance, you manage sets of parameters that you assign to entire clusters or specific DB instances. For example, the time zone setting applies to all DB instances in the cluster, and you can adjust the page cache size setting for each DB instance.
You start with one of the default parameter sets, and apply changes to only the parameters that you need to fine-tune. For details about working with parameter groups, see Amazon Aurora DB cluster and DB instance parameters. For the configuration settings that are or aren't applicable to Aurora clusters, see Aurora MySQL configuration parameters or Amazon Aurora PostgreSQL parameters depending on your database engine.
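For example, the following sketch creates a custom cluster parameter group and overrides a single cluster-level parameter, assuming boto3 and the Aurora MySQL 3.x parameter group family. The group name and time zone value are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Start from the default family and change only the parameters you need.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="poc-cluster-params",
    DBParameterGroupFamily="aurora-mysql8.0",
    Description="PoC cluster-level settings",
)
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="poc-cluster-params",
    Parameters=[
        {
            "ParameterName": "time_zone",   # cluster-level: applies to every DB instance
            "ParameterValue": "US/Eastern",
            "ApplyMethod": "pending-reboot",
        }
    ],
)
```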
9. Connect to Aurora
As you find when doing your initial schema and data setup and running sample queries, you can connect to different endpoints in an Aurora cluster. The endpoint to use depends on whether the operation is a read, such as a SELECT statement, or a write, such as a CREATE or INSERT statement.
As you increase the workload on an Aurora cluster and experiment with Aurora features, it's important for your application to assign each operation to the appropriate endpoint.
By using the cluster endpoint for write operations, you always connect to a DB instance in the cluster that has read/write capability. By default, only one DB instance in an Aurora cluster has read/write capability. This DB instance is called the primary instance. If the original primary instance becomes unavailable, Aurora activates a failover mechanism and a different DB instance takes over as the primary.
Similarly, by directing SELECT statements to the reader endpoint, you spread the work of processing queries among the DB instances in the cluster. Each reader connection is assigned to a different DB instance using round-robin DNS resolution. Doing most of the query work on the read-only Aurora Replicas reduces the load on the primary instance, freeing it to handle DDL and DML statements.
Using these endpoints reduces the dependency on hard-coded hostnames, and helps your application to recover more quickly from DB instance failures.
Note
Aurora also has custom endpoints that you create. Those endpoints usually aren't needed during a proof of concept.
The Aurora Replicas are subject to replica lag, although that lag is usually 10 to 20 milliseconds. You can monitor the replication lag and decide whether it is within the range of your data consistency requirements. In some cases, your read queries might require strong read consistency (read-after-write consistency). In these cases, you can continue using the cluster endpoint for them instead of the reader endpoint.
To take full advantage of Aurora capabilities for distributed parallel execution, you might need to change the connection logic. Your objective is to avoid sending all read requests to the primary instance. The read-only Aurora Replicas are standing by, with all the same data, ready to handle SELECT statements. Code your application logic to use the appropriate endpoint for each kind of operation. Follow these general guidelines; a connection-handling sketch follows the list:
- Avoid using a single hard-coded connection string for all database sessions.
- If practical, enclose write operations such as DDL and DML statements in functions in your client application code. That way, you can make different kinds of operations use specific connections.
- Make separate functions for query operations. Aurora assigns each new connection to the reader endpoint to a different Aurora Replica, which balances the load for read-intensive applications.
- For operations involving sets of queries, close and reopen the connection to the reader endpoint when each set of related queries is finished. Use connection pooling if that feature is available in your software stack. Directing queries to different connections helps Aurora to distribute the read workload among the DB instances in the cluster.
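The following sketch shows one way to structure that logic, assuming the PyMySQL driver against an Aurora MySQL cluster. The endpoints, credentials, and database name are placeholders.

```python
import pymysql

WRITER_ENDPOINT = "myprodappdb.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com"     # placeholder
READER_ENDPOINT = "myprodappdb.cluster-ro-xxxxxxxx.us-east-1.rds.amazonaws.com"  # placeholder

def connect(host):
    # Credentials are placeholders; in practice, fetch them from a secure store.
    return pymysql.connect(host=host, user="app", password="...", database="pocdb")

def run_dml(statement, params=None):
    # Writes always go through the cluster (writer) endpoint.
    conn = connect(WRITER_ENDPOINT)
    try:
        with conn.cursor() as cur:
            cur.execute(statement, params)
        conn.commit()
    finally:
        conn.close()

def run_query(statement, params=None):
    # Each short-lived reader connection can land on a different Aurora Replica.
    conn = connect(READER_ENDPOINT)
    try:
        with conn.cursor() as cur:
            cur.execute(statement, params)
            return cur.fetchall()
    finally:
        conn.close()
```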
For general information about connection management and endpoints for Aurora, see Connecting to an Amazon Aurora DB cluster. For a deep dive on this subject, see the AWS whitepaper Aurora MySQL database administrator's handbook – Connection management.
10. Run your workload
After the schema, data, and configuration settings are in place, you can begin exercising the cluster by running your workload. Use a workload in the proof of concept that mirrors the main aspects of your production workload. We recommend always making decisions about performance using real-world tests and workloads rather than synthetic benchmarks such as sysbench or TPC-C. Wherever practical, gather measurements based on your own schema, query patterns, and usage volume.
As much as practical, replicate the actual conditions under which the application will run. For example, you typically run your application code on Amazon EC2 instances in the same AWS Region and the same virtual private cloud (VPC) as the Aurora cluster. If your production application runs on multiple EC2 instances spanning multiple Availability Zones, set up your proof-of-concept environment in the same way. For more information on AWS Regions, see Regions and Availability Zones in the Amazon RDS User Guide. To learn more about the Amazon VPC service, see What is Amazon VPC? in the Amazon VPC User Guide.
After you've verified that the basic features of your application work and you can access the data through Aurora, you can exercise aspects of the Aurora cluster. Some features you might want to try are concurrent connections with load balancing, concurrent transactions, and automatic replication.
By this point, the data transfer mechanisms should be familiar, and so you can run tests with a larger proportion of sample data.
This stage is when you can see the effects of changing configuration settings such as memory limits and connection limits. Revisit the procedures that you explored in 8. Specify configuration settings.
You can also experiment with mechanisms such as creating and restoring snapshots. For example, you can create clusters with different AWS instance classes, numbers of Aurora Replicas, and so on. Then on each cluster, you can restore the same snapshot containing your schema and all your data. For the details of that cycle, see Creating a DB cluster snapshot and Restoring from a DB cluster snapshot.
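A sketch of that snapshot-and-restore cycle follows, assuming boto3; identifiers and the instance class are placeholders. A cluster restored from a snapshot starts with no DB instances, so you add at least one afterward.

```python
import boto3

rds = boto3.client("rds")

# Snapshot the cluster that holds your schema and data.
rds.create_db_cluster_snapshot(
    DBClusterSnapshotIdentifier="poc-baseline",
    DBClusterIdentifier="myprodappdb",
)
rds.get_waiter("db_cluster_snapshot_available").wait(
    DBClusterSnapshotIdentifier="poc-baseline"
)

# Restore the snapshot into a new cluster, then give it a DB instance.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="myprodappdb-test2",
    SnapshotIdentifier="poc-baseline",
    Engine="aurora-mysql",
)
rds.create_db_instance(
    DBInstanceIdentifier="myprodappdb-test2-a-01",
    DBClusterIdentifier="myprodappdb-test2",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.2xlarge",  # try a different instance class per cluster
)
```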
11. Measure performance
Best practices in this area are designed to ensure that you have the right tools and processes set up to quickly isolate abnormal behaviors during workload operations, and that you can reliably identify their causes.
You can always see the current state of your cluster, or examine trends over time, by examining the Monitoring tab. This tab is available from the console detail page for each Aurora cluster or DB instance. It displays metrics from the Amazon CloudWatch monitoring service in the form of charts. You can filter the metrics by name, by DB instance, and by time period.
To have more choices on the Monitoring tab, enable Enhanced Monitoring and Performance Insights in the cluster settings. You can also enable those choices later if you didn't choose them when setting up the cluster.
To measure performance, you rely mostly on the charts showing activity for the whole Aurora cluster. You can verify whether the Aurora Replicas have similar load and response times. You can also see how the work is split up between the read/write primary instance and the read-only Aurora Replicas. If there is some imbalance between the DB instances or an issue affecting only one DB instance, you can examine the Monitoring tab for that specific instance.
After the environment and the actual workload are set up to emulate your production application, you can measure how well Aurora performs. The most important questions to answer are as follows:
- How many queries per second is Aurora processing? You can examine the Throughput metrics to see the figures for various kinds of operations.
- How long does it take, on average, for Aurora to process a given query? You can examine the Latency metrics to see the figures for various kinds of operations.
To view the throughput and latency metrics, check the Monitoring tab for a given Aurora cluster in the Amazon RDS console.
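You can also pull the same figures programmatically, because Aurora publishes its cluster-level metrics to Amazon CloudWatch. The following is a minimal sketch assuming boto3; the cluster identifier is a placeholder.

```python
import boto3
from datetime import datetime, timedelta, timezone

cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# Aurora throughput and latency metrics for the whole cluster.
for metric in ["SelectThroughput", "SelectLatency", "DMLThroughput", "DMLLatency"]:
    stats = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBClusterIdentifier", "Value": "myprodappdb"}],
        StartTime=start,
        EndTime=end,
        Period=300,  # 5-minute buckets
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(metric, point["Timestamp"], round(point["Average"], 2))
```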
If you can, establish baseline values for these metrics in your current environment. If that's not practical, construct a baseline on the Aurora cluster by executing a workload equivalent to your production application. For example, run your Aurora workload with a similar number of simultaneous users and queries. Then observe how the values change as you experiment with different instance classes, cluster size, configuration settings, and so on.
If the throughput numbers are lower than you expect, investigate further the factors affecting database performance for your workload. Similarly, if the latency numbers are higher than you expect, further investigate. To do so, monitor the secondary metrics for the DB server (CPU, memory, and so on). You can see whether the DB instances are close to their limits. You can also see how much extra capacity your DB instances have to handle more concurrent queries, queries against larger tables, and so on.
Tip
To detect metric values that fall outside the expected ranges, set up CloudWatch alarms.
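For example, the following sketch creates one such alarm on CPU utilization for a single DB instance, assuming boto3. The alarm name, instance identifier, and threshold are placeholders, and in practice you'd add an SNS topic to AlarmActions so that the alarm notifies you.

```python
import boto3

cw = boto3.client("cloudwatch")

# Alarm when average CPU on one DB instance stays above 80 percent for 15 minutes.
cw.put_metric_alarm(
    AlarmName="poc-myprodappdb-a-01-high-cpu",
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "myprodappdb-a-01"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    # AlarmActions=["arn:aws:sns:..."],  # placeholder SNS topic for notifications
)
```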
When evaluating the ideal Aurora cluster size and capacity, you can find the configuration that achieves peak application performance without over-provisioning resources. One important factor is finding the appropriate size for the DB instances in the Aurora cluster. Start by selecting an instance size that has similar CPU and memory capacity to your current production environment. Collect throughput and latency numbers for the workload at that instance size. Then, scale the instance up to the next larger size. See if the throughput and latency numbers improve. Also scale the instance size down, and see if the latency and throughput numbers remain the same. Your goal is to get the highest throughput, with the lowest latency, on the smallest instance possible.
Tip
Size your Aurora clusters and associated DB instances with enough existing capacity to handle sudden, unpredictable traffic spikes. For mission-critical databases, leave at least 20 percent spare CPU and memory capacity.
Run performance tests long enough to measure database performance in a warm, steady state. You might need to run the workload for many minutes or even a few hours before reaching this steady state. It's normal at the beginning of a run to have some variance. This variance happens because each Aurora Replica warms up its caches based on the SELECT queries that it handles.
Aurora performs best with transactional workloads involving multiple concurrent users and queries. To ensure that you're driving enough load for optimal performance, run benchmarks that use multithreading, or run multiple instances of the performance tests concurrently. Measure performance with hundreds or even thousands of concurrent client threads. Simulate the number of concurrent threads that you expect in your production environment. You might also perform additional stress tests with more threads to measure Aurora scalability.
12. Exercise Aurora high availability
Many of the main Aurora features involve high availability. These features include automatic replication, automatic failover, automatic backups with point-in-time restore, and the ability to add DB instances to the cluster. The safety and reliability that features like these provide are important for mission-critical applications.
To evaluate these features requires a certain mindset. In earlier activities, such as performance measurement, you observe how the system performs when everything works correctly. Testing high availability requires you to think through worst-case behavior. You must consider various kinds of failures, even if such conditions are rare. You might intentionally introduce problems to make sure that the system recovers correctly and quickly.
Tip
For a proof of concept, set up all the DB instances in an Aurora cluster with the same AWS instance class. Doing so makes it possible to try out Aurora availability features without major changes to performance and scalability as you take DB instances offline to simulate failures.
We recommend using at least two instances in each Aurora cluster. The DB instances in an Aurora cluster can span up to three Availability Zones (AZs). Locate each of the first two or three DB instances in a different AZ. When you begin using larger clusters, spread your DB instances across all of the AZs in your AWS Region. Doing so increases fault tolerance capability. Even if a problem affects an entire AZ, Aurora can fail over to a DB instance in a different AZ. If you run a cluster with more than three instances, distribute the DB instances as evenly as you can over all three AZs.
Tip
The storage for an Aurora cluster is independent from the DB instances. The storage for each Aurora cluster always spans three AZs.
When you test high availability features, always use DB instances with identical capacity in your test cluster. Doing so avoids unpredictable changes in performance, latency, and so on whenever one DB instance takes over for another.
To learn how to simulate failure conditions to test high availability features, see Testing Amazon Aurora MySQL using fault injection queries.
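You can also trigger a failover directly, which is often the simplest worst-case test. The following is a minimal sketch assuming boto3; the cluster identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Force a failover; Aurora promotes one of the Aurora Replicas to primary.
rds.failover_db_cluster(DBClusterIdentifier="myprodappdb")

# Afterward, check which DB instance is now the writer. The membership
# information can take a short time to reflect the new roles.
cluster = rds.describe_db_clusters(DBClusterIdentifier="myprodappdb")["DBClusters"][0]
for member in cluster["DBClusterMembers"]:
    print(member["DBInstanceIdentifier"], "writer" if member["IsClusterWriter"] else "reader")
```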
As part of your proof-of-concept exercise, one objective is to find the ideal number of DB instances and the optimal instance class for those DB instances. Doing so requires balancing the requirements of high availability and performance.
For Aurora, the more DB instances that you have in a cluster, the greater the benefits for high availability. Having more DB instances also improves scalability of read-intensive applications. Aurora can distribute multiple connections for SELECT queries among the read-only Aurora Replicas.
On the other hand, limiting the number of DB instances reduces the replication traffic from the primary node. The replication traffic consumes network bandwidth, which is another aspect of overall performance and scalability. Thus, for write-intensive OLTP applications, prefer to have a smaller number of large DB instances rather than many small DB instances.
In a typical Aurora cluster, one DB instance (the primary instance) handles all the DDL and DML statements. The other DB instances (the Aurora Replicas) handle only SELECT statements. Although the DB instances don't do exactly the same amount of work, we recommend using the same instance class for all the DB instances in the cluster. That way, if a failure happens and Aurora promotes one of the read-only DB instances to be the new primary instance, the primary instance has the same capacity as before.
If you need to use DB instances of different capacities in the same cluster, set up failover tiers for the DB instances. These tiers determine the order in which Aurora Replicas are promoted by the failover mechanism. Put DB instances that are a lot larger or smaller than the others into a lower failover tier. Doing so ensures that they are chosen last for promotion.
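For example, assuming boto3, the following sketch moves one undersized DB instance to the lowest-priority failover tier (tier 0 is the highest priority, tier 15 the lowest). The instance identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Make this instance the last candidate for promotion during a failover.
rds.modify_db_instance(
    DBInstanceIdentifier="myprodappdb-c-01",
    PromotionTier=15,
    ApplyImmediately=True,
)
```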
Exercise the data recovery features of Aurora, such as automatic point-in-time restore, manual snapshots and restore, and cluster backtracking. If appropriate, copy snapshots to other AWS Regions and restore into other AWS Regions to mimic DR scenarios.
Investigate your organization's requirements for recovery time objective (RTO), recovery point objective (RPO), and geographic redundancy. Most organizations group these items under the broad category of disaster recovery. Evaluate the Aurora high availability features described in this section in the context of your disaster recovery process to ensure that your RTO and RPO requirements are met.
13. What to do next
At the end of a successful proof-of-concept process, you confirm that Aurora is a suitable solution for you based on the anticipated workload. Throughout the preceding process, you've checked how Aurora works in a realistic operational environment and measured it against your success criteria.
After you get your database environment up and running with Aurora, you can move on to more detailed evaluation steps, leading to your final migration and production deployment. Depending on your situation, these other steps might or might not be included in the proof-of-concept process. For details about migration and porting activities, see the AWS whitepaper Aurora migration handbook.
As another next step, consider the security configurations relevant for your workload and designed to meet your security requirements in a production environment. Plan what controls to put in place to protect access to the Aurora cluster master user credentials. Define the roles and responsibilities of database users to control access to data stored in the Aurora cluster. Take into account database access requirements for applications, scripts, and third-party tools or services. Explore AWS services and features such as AWS Secrets Manager and AWS Identity and Access Management (IAM) authentication.
At this point, you should understand the procedures and best practices for running benchmark tests with Aurora. You might find you need to do additional performance tuning. For details, see Managing performance and scaling for Aurora DB clusters, Amazon Aurora MySQL performance enhancements, Performance and scaling for Amazon Aurora PostgreSQL, and Monitoring DB load with Performance Insights on Amazon Aurora. If you do additional tuning, make sure that you're familiar with the metrics that you gathered during the proof of concept. For a next step, you might create new clusters with different choices for configuration settings, database engine, and database version. Or you might create specialized kinds of Aurora clusters to match the needs of specific use cases.
For example, you can explore Aurora parallel query clusters for hybrid transaction/analytical processing (HTAP) applications. If wide geographic distribution is crucial for disaster recovery or to minimize latency, you can explore Aurora global databases. If your workload is intermittent or you're using Aurora in a development/test scenario, you can explore Aurora Serverless clusters.
Your production clusters might also need to handle high volumes of incoming connections. To learn those techniques, see the AWS whitepaper Aurora MySQL database administrator's handbook – Connection management.
If, after the proof of concept, you decide that your use case is not suited for Aurora, consider these other AWS services:
- For purely analytic use cases, workloads benefit from a columnar storage format and other features more suitable to OLAP workloads. AWS services that address such use cases include the following:
- Many workloads benefit from a combination of Aurora with one or more of these services. You can move data between these services by using these:
  - Importing from Amazon S3, as described in the Amazon Aurora User Guide
  - Exporting to Amazon S3, as described in the Amazon Aurora User Guide
  - Many other popular ETL tools