Upgrading the major version of an Amazon Aurora MySQL DB cluster - Amazon Aurora

Upgrading the major version of an Amazon Aurora MySQL DB cluster

In an Aurora MySQL version number such as 2.12.1, the 2 represents the major version. Aurora MySQL version 2 is compatible with MySQL 5.7. Aurora MySQL version 3 is compatible with MySQL 8.0.

Upgrading between major versions requires more extensive planning and testing than for a minor version. The process can take substantial time. After the upgrade is finished, you also might have follow-up work to do. For example, this might occur because of differences in SQL compatibility or the way certain MySQL-related features work. Or it might occur because of differing parameter settings between the old and new versions.

Upgrading from Aurora MySQL version 2 to version 3

If you have a MySQL 5.7–compatible cluster and want to upgrade it to a MySQL 8.0–compatible cluster, you can do so by running an upgrade process on the cluster itself. This kind of upgrade is an in-place upgrade, in contrast to upgrades that you do by creating a new cluster. This technique keeps the same endpoint and other characteristics of the cluster. The upgrade is relatively fast because it doesn't require copying all your data to a new cluster volume. This stability helps to minimize any configuration changes in your applications. It also helps to reduce the amount of testing for the upgraded cluster, because the number of DB instances and their instance classes all stay the same.

The in-place upgrade mechanism involves shutting down your DB cluster while the operation takes place. Aurora performs a clean shutdown and completes outstanding operations such as transaction rollback and undo purge. For more information, see How the Aurora MySQL in-place major version upgrade works.

The in-place upgrade method is convenient, because it is simple to perform and minimizes configuration changes to associated applications. For example, an in-place upgrade preserves the endpoints and set of DB instances for your cluster. However, the time needed for an in-place upgrade can vary depending on the properties of your schema and how busy the cluster is. Thus, depending on the needs for your cluster, you can choose among the available upgrade techniques.

For general information about Aurora MySQL version 3 and its new features, see Aurora MySQL version 3 compatible with MySQL 8.0.

For details about planning an upgrade, see Planning a major version upgrade for an Aurora MySQL cluster and How to perform an in-place upgrade.

Planning a major version upgrade for an Aurora MySQL cluster

To help you decide the right time and approach to upgrade, you can learn the differences between Aurora MySQL version 3 and your current environment:

You can also find more MySQL-specific upgrade considerations and tips in Changes in MySQL 8.0 in the MySQL Reference Manual. For example, you can use the command mysqlcheck --check-upgrade to analyze your existing Aurora MySQL databases and identify potential upgrade issues.
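As a sketch of that precheck, the following shell function builds the mysqlcheck invocation; the endpoint and user name are placeholder assumptions, so run the printed command against your own cluster:

```shell
# Build the mysqlcheck pre-upgrade command for an Aurora MySQL endpoint.
# The host and user below are hypothetical placeholders.
build_upgrade_check_cmd() {
  local host="$1" user="$2"
  echo "mysqlcheck --check-upgrade --all-databases --host=${host} --user=${user} --password"
}

cmd=$(build_upgrade_check_cmd "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com" "admin")
echo "$cmd"
```

Running the printed command prompts for the password and reports tables with potential upgrade issues.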

Note

We recommend using larger DB instance classes when upgrading to Aurora MySQL version 3 using the in-place upgrade or snapshot restore technique. Examples are db.r5.24xlarge and db.r6g.16xlarge. This helps the upgrade process to complete faster by using the majority of available CPU capacity on the DB instance. You can change to the DB instance class that you want after the major version upgrade is complete.

After you finish the upgrade itself, you can follow the post-upgrade procedures in Post-upgrade cleanup for Aurora MySQL version 3. Finally, test your application's functionality and performance.

If you're converting from RDS for MySQL or community MySQL, follow the migration procedure explained in Migrating data to an Amazon Aurora MySQL DB cluster. In some cases, you might use binary log replication to synchronize your data with an Aurora MySQL version 3 cluster as part of the migration. If so, the source system must run a version that's compatible with your target DB cluster.

To make sure that your applications and administration procedures work smoothly after upgrading a cluster between major versions, do some advance planning and preparation. To see what sorts of management code to update for your AWS CLI scripts or RDS API–based applications, see How in-place upgrades affect the parameter groups for a cluster. Also see Changes to cluster properties between Aurora MySQL versions.

To learn what issues you might encounter during the upgrade, see Troubleshooting for Aurora MySQL in-place upgrade. For issues that might cause the upgrade to take a long time, you can test those conditions in advance and correct them.

Note

An in-place upgrade involves shutting down your DB cluster while the operation takes place. Aurora MySQL performs a clean shutdown and completes outstanding operations such as undo purge. An upgrade might take a long time if there are many undo records to purge. We recommend performing the upgrade only after the history list length (HLL) is low. A generally acceptable value for the HLL is 100,000 or less. For more information, see this blog post.
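To put that guidance into practice, the following sketch shows the standard InnoDB metrics query for the HLL and a small helper that applies the 100,000 threshold; the helper name is hypothetical, and you fetch the actual value by running the query against your cluster:

```shell
# SQL to fetch the current history list length (HLL) from InnoDB metrics.
HLL_QUERY="SELECT count FROM information_schema.INNODB_METRICS WHERE name = 'trx_rseg_history_len';"

# Hypothetical helper: apply the ~100,000 threshold mentioned above.
hll_ok() {
  [ "$1" -le 100000 ] && echo "proceed" || echo "wait"
}

hll_ok 50000    # prints "proceed": purge backlog is small
hll_ok 2000000  # prints "wait": postpone until purge catches up
```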

Simulating the upgrade by cloning your DB cluster

You can check application compatibility, performance, maintenance procedures, and similar considerations for the upgraded cluster. To do so, you can perform a simulation of the upgrade before doing the real upgrade. This technique can be especially useful for production clusters. Here, it's important to minimize downtime and have the upgraded cluster ready to go as soon as the upgrade has finished.

Use the following steps:

  1. Create a clone of the original cluster. Follow the procedure in Cloning a volume for an Amazon Aurora DB cluster.

  2. Set up a similar set of writer and reader DB instances as in the original cluster.

  3. Perform an in-place upgrade of the cloned cluster. Follow the procedure in How to perform an in-place upgrade.

    Start the upgrade immediately after creating the clone. That way, the cluster volume is still identical to the state of the original cluster. If the clone sits idle before you do the upgrade, Aurora performs database cleanup processes in the background. In that case, the upgrade of the clone isn't an accurate simulation of upgrading the original cluster.

  4. Test application compatibility, performance, administration procedures, and so on, using the cloned cluster.

  5. If you encounter any issues, adjust your upgrade plans to account for them. For example, adapt any application code to be compatible with the feature set of the higher version. Estimate how long the upgrade is likely to take based on the amount of data in your cluster. You might also choose to schedule the upgrade for a time when the cluster isn't busy.

  6. After you're satisfied that your applications and workload work properly with the test cluster, you can perform the in-place upgrade for your production cluster.

  7. Work to minimize the total downtime of your cluster during a major version upgrade. To do so, make sure that the workload on the cluster is low or zero at the time of the upgrade. In particular, make sure that there are no long-running transactions in progress when you start the upgrade.
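Step 1 of this simulation can be sketched with the AWS CLI. Aurora cloning is exposed as a copy-on-write point-in-time restore; the cluster names below are placeholders, and DRY_RUN=1 makes the sketch print the command instead of calling AWS:

```shell
# Clone the original cluster for upgrade testing (copy-on-write restore).
# DRY_RUN=1 prints the command; set DRY_RUN=0 to actually run it.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

run aws rds restore-db-cluster-to-point-in-time \
    --restore-type copy-on-write \
    --use-latest-restorable-time \
    --source-db-cluster-identifier my-prod-cluster \
    --db-cluster-identifier my-prod-cluster-upgrade-test
```

Start the in-place upgrade of the clone immediately afterward, as step 3 recommends.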

Using the blue-green upgrade technique

You can also create a blue/green deployment that runs the old and new clusters side-by-side. Here, you replicate data from the old cluster to the new one until you are ready for the new cluster to take over. For details, see Using Amazon RDS Blue/Green Deployments for database updates.

Major version upgrade prechecks for Aurora MySQL

MySQL 8.0 includes a number of incompatibilities with MySQL 5.7. These incompatibilities can cause problems during an upgrade from Aurora MySQL version 2 to version 3. Some preparation might be required on your database for the upgrade to be successful.

When you start an upgrade from Aurora MySQL version 2 to version 3, Amazon Aurora runs prechecks automatically to detect these incompatibilities.

These prechecks are mandatory. You can't choose to skip them. The prechecks provide the following benefits:

  • They enable you to avoid unplanned downtime during the upgrade.

  • If there are incompatibilities, Amazon Aurora prevents the upgrade and provides a log for you to learn about them. You can then use the log to prepare your database for the upgrade to version 3 by reducing the incompatibilities. For detailed information about removing incompatibilities, see Preparing your installation for upgrade in the MySQL documentation and Upgrading to MySQL 8.0? Here is what you need to know... on the MySQL Server Blog.

    For more information about upgrading to MySQL 8.0, see Upgrading MySQL in the MySQL documentation.

The prechecks include some that are included with MySQL and some that were created specifically by the Aurora team. For information about the prechecks provided by MySQL, see Upgrade checker utility.

The prechecks run before the DB instance is stopped for the upgrade, meaning that they don't cause any downtime when they run. If the prechecks find an incompatibility, Aurora automatically cancels the upgrade before the DB instance is stopped. Aurora also generates an event for the incompatibility. For more information about Amazon Aurora events, see Working with Amazon RDS event notification.

Aurora records detailed information about each incompatibility in the log file PrePatchCompatibility.log. In most cases, the log entry includes a link to the MySQL documentation for correcting the incompatibility. For more information about viewing log files, see Viewing and listing database log files.
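The log can also be retrieved with the AWS CLI. The instance name below is a placeholder, and DRY_RUN=1 makes this sketch print the commands rather than call AWS:

```shell
# List and download PrePatchCompatibility.log from the writer instance.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

run aws rds describe-db-log-files \
    --db-instance-identifier my-writer-instance \
    --filename-contains PrePatchCompatibility

run aws rds download-db-log-file-portion \
    --db-instance-identifier my-writer-instance \
    --log-file-name PrePatchCompatibility.log \
    --output text
```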

The prechecks analyze the objects in your database. This analysis consumes resources and increases the time for the upgrade to complete.

Community MySQL upgrade prechecks

The following is a general list of incompatibilities between MySQL 5.7 and 8.0:

  • Your MySQL 5.7–compatible DB cluster must not use features that aren't supported in MySQL 8.0.

    For more information, see Features removed in MySQL 8.0 in the MySQL documentation.

  • There must be no keyword or reserved word violations. Some keywords might be reserved in MySQL 8.0 that were not reserved previously.

    For more information, see Keywords and reserved words in the MySQL documentation.

  • For improved Unicode support, consider converting objects that use the utf8mb3 charset to use the utf8mb4 charset. The utf8mb3 character set is deprecated. Also, consider using utf8mb4 for character set references instead of utf8, because currently utf8 is an alias for the utf8mb3 charset.

    For more information, see The utf8mb3 character set (3-byte UTF-8 unicode encoding) in the MySQL documentation.

  • There must be no InnoDB tables with a nondefault row format.

  • There must be no ZEROFILL or display length type attributes.

  • There must be no partitioned table that uses a storage engine that does not have native partitioning support.

  • There must be no tables in the MySQL 5.7 mysql system database that have the same name as a table used by the MySQL 8.0 data dictionary.

  • There must be no tables that use obsolete data types or functions.

  • There must be no foreign key constraint names longer than 64 characters.

  • There must be no obsolete SQL modes defined in your sql_mode system variable setting.

  • There must be no tables or stored procedures with individual ENUM or SET column elements that exceed 255 characters in length.

  • There must be no table partitions that reside in shared InnoDB tablespaces.

  • There must be no circular references in tablespace data file paths.

  • There must be no queries and stored program definitions that use ASC or DESC qualifiers for GROUP BY clauses.

  • There must be no removed system variables, and system variables must use the new default values for MySQL 8.0.

  • There must be no zero (0) date, datetime, or timestamp values.

  • There must be no schema inconsistencies resulting from file removal or corruption.

  • There must be no table names that contain the FTS character string.

  • There must be no InnoDB tables that belong to a different engine.

  • There must be no table or schema names that are invalid for MySQL 5.7.

For more information about upgrading to MySQL 8.0, see Upgrading MySQL in the MySQL documentation.
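You can look for some of these conditions yourself before starting the upgrade. For example, the following query (a sketch; adapt the schema filter to your environment) finds user tables still using the deprecated utf8mb3/utf8 character set:

```shell
# SQL to list tables whose collation is utf8 (utf8mb3) rather than utf8mb4.
# The escaped underscore excludes utf8mb4_* collations from the match.
UTF8MB3_QUERY=$(cat <<'SQL'
SELECT table_schema, table_name, table_collation
FROM information_schema.tables
WHERE table_collation LIKE 'utf8\_%'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
SQL
)
echo "$UTF8MB3_QUERY"
```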

Aurora MySQL upgrade prechecks

Aurora MySQL has its own specific requirements when upgrading from version 2 to version 3:

  • There must be no deprecated SQL syntax, such as SQL_CACHE, SQL_NO_CACHE, and QUERY_CACHE, in views, routines, triggers, and events.

  • There must be no FTS_DOC_ID column present on any table without the FTS index.

  • There must be no column definition mismatch between the InnoDB data dictionary and the actual table definition.

  • All database and table names must be lowercase when the lower_case_table_names parameter is set to 1.

  • Events and triggers must not have a missing or empty definer or an invalid creation context.

  • All trigger names in a database must be unique.

  • DDL recovery and Fast DDL aren't supported in Aurora MySQL version 3. There must be no artifacts in databases related to these features.

  • Tables with the REDUNDANT or COMPACT row format can't have indexes larger than 767 bytes.

  • The prefix length of indexes defined on tiny text columns can't exceed 255 bytes. With the utf8mb4 character set, this limits the prefix length supported to 63 characters.

    A larger prefix length was allowed in MySQL 5.7 using the innodb_large_prefix parameter. This parameter is deprecated in MySQL 8.0.

  • There must be no InnoDB metadata inconsistency in the mysql.host table.

  • There must be no column data type mismatch in system tables.

  • There must be no XA transactions in the prepared state.

  • Column names in views can't be longer than 64 characters.

  • Special characters in stored procedures can't be inconsistent.

  • Tables can't have data file path inconsistency.
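Two of these conditions are easy to check manually before you start. XA RECOVER lists XA transactions left in the prepared state, and a query against information_schema (a sketch; verify against your schema) finds FTS_DOC_ID columns on tables that have no full-text index:

```shell
# Check for XA transactions in the prepared state (run against the cluster).
XA_QUERY="XA RECOVER;"

# Sketch: find FTS_DOC_ID columns on tables without a FULLTEXT index.
FTS_QUERY=$(cat <<'SQL'
SELECT c.table_schema, c.table_name
FROM information_schema.columns c
WHERE c.column_name = 'FTS_DOC_ID'
  AND NOT EXISTS (
    SELECT 1 FROM information_schema.statistics s
    WHERE s.table_schema = c.table_schema
      AND s.table_name = c.table_name
      AND s.index_type = 'FULLTEXT'
  );
SQL
)
echo "$XA_QUERY"
```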

Aurora MySQL major version upgrade paths

Not all kinds or versions of Aurora MySQL clusters can use the in-place upgrade mechanism. You can learn the appropriate upgrade path for each Aurora MySQL cluster by consulting the following table.

Type of Aurora MySQL DB cluster Can it use in-place upgrade? Action

Aurora MySQL provisioned cluster, 2.0 or higher

Yes

In-place upgrade is supported for 5.7-compatible Aurora MySQL clusters.

For information about upgrading to Aurora MySQL version 3, see Planning a major version upgrade for an Aurora MySQL cluster and How to perform an in-place upgrade.

Aurora MySQL provisioned cluster, 3.01.0 or higher

N/A

Use a minor version upgrade procedure to upgrade between minor versions of Aurora MySQL version 3.

Aurora Serverless v1 cluster

N/A

Currently, Aurora Serverless v1 is supported for Aurora MySQL only on version 2.

Aurora Serverless v2 cluster

N/A

Currently, Aurora Serverless v2 is supported for Aurora MySQL only on version 3.

Cluster in an Aurora global database

Yes

To upgrade Aurora MySQL from version 2 to version 3, follow the procedure for doing an in-place upgrade for clusters in an Aurora global database. Perform the upgrade on the global cluster. Aurora upgrades the primary cluster and all the secondary clusters in the global database at the same time.

If you use the AWS CLI or RDS API, call the modify-global-cluster command or ModifyGlobalCluster operation instead of modify-db-cluster or ModifyDBCluster.

You can't perform an in-place upgrade from Aurora MySQL version 2 to version 3 if the lower_case_table_names parameter is turned on. For more information, see Major version upgrades.

Parallel query cluster

Yes

You can perform an in-place upgrade. In this case, choose 2.09.1 or higher for the Aurora MySQL version.

Cluster that is the target of binary log replication

Maybe

If the binary log replication is from an Aurora MySQL cluster, you can perform an in-place upgrade. You can't perform the upgrade if the binary log replication is from an RDS for MySQL or an on-premises MySQL DB instance. In that case, you can upgrade using the snapshot restore mechanism.

Cluster with zero DB instances

No

Using the AWS CLI or the RDS API, you can create an Aurora MySQL cluster without any attached DB instances. In the same way, you can also remove all DB instances from an Aurora MySQL cluster while leaving the data in the cluster volume intact. While a cluster has zero DB instances, you can't perform an in-place upgrade.

The upgrade mechanism requires a writer instance in the cluster to perform conversions on the system tables, data files, and so on. In this case, use the AWS CLI or the RDS API to create a writer instance for the cluster. Then you can perform an in-place upgrade.

Cluster with backtrack enabled

Yes

You can perform an in-place upgrade for an Aurora MySQL cluster that uses the backtrack feature. However, after the upgrade, you can't backtrack the cluster to a time before the upgrade.

How the Aurora MySQL in-place major version upgrade works

Aurora MySQL performs a major version upgrade as a multistage process. You can check the current status of an upgrade. Some of the upgrade steps also provide progress information. As each stage begins, Aurora MySQL records an event. You can examine events as they occur on the Events page in the RDS console. For more information about working with events, see Working with Amazon RDS event notification.

Important

Once the process begins, it runs until the upgrade either succeeds or fails. You can't cancel the upgrade while it's underway. If the upgrade fails, Aurora rolls back all the changes and your cluster has the same engine version, metadata, and so on as before.

The upgrade process consists of these stages:

  1. Aurora performs a series of prechecks before beginning the upgrade process. Your cluster keeps running while Aurora does these checks. For example, the cluster can't have any XA transactions in the prepared state or be processing any data definition language (DDL) statements. To meet these conditions, you might need to shut down applications that are submitting certain kinds of SQL statements, or simply wait until long-running statements are finished. Then try the upgrade again. Some checks test for conditions that don't prevent the upgrade but might make the upgrade take a long time.

    If Aurora detects that any required conditions aren't met, correct the conditions identified in the event details, following the guidance in Troubleshooting for Aurora MySQL in-place upgrade. If Aurora detects conditions that might cause a slow upgrade, plan to monitor the upgrade over an extended period.

  2. Aurora takes your cluster offline. Then Aurora performs a similar set of tests as in the previous stage, to confirm that no new issues arose during the shutdown process. If Aurora detects any conditions at this point that would prevent the upgrade, Aurora cancels the upgrade and brings the cluster back online. In this case, confirm when the conditions no longer apply and start the upgrade again.

  3. Aurora creates a snapshot of your cluster volume. Suppose that you discover compatibility or other kinds of issues after the upgrade is finished. Or suppose that you want to perform testing using both the original and upgraded clusters. In such cases, you can restore from this snapshot to create a new cluster with the original engine version and the original data.

    Tip

    This snapshot is a manual snapshot. However, Aurora can create it and continue with the upgrade process even if you have reached your quota for manual snapshots. The snapshot remains until you delete it. After you finish all post-upgrade testing, you can delete it to minimize storage charges.

  4. Aurora clones your cluster volume. Cloning is a fast operation that doesn't involve copying the actual table data. If Aurora encounters an issue during the upgrade, it reverts to the original data from the cloned cluster volume and brings the cluster back online. The temporary cloned volume during the upgrade isn't subject to the usual limit on the number of clones for a single cluster volume.

  5. Aurora performs a clean shutdown for the writer DB instance. During the clean shutdown, progress events are recorded every 15 minutes for the following operations. You can examine events as they occur on the Events page in the RDS console.

    • Aurora purges the undo records for old versions of rows.

    • Aurora rolls back any uncommitted transactions.

  6. Aurora upgrades the engine version on the writer DB instance:

    • Aurora installs the binary for the new engine version on the writer DB instance.

    • Aurora uses the writer DB instance to upgrade your data to MySQL 8.0-compatible format. During this stage, Aurora modifies the system tables and performs other conversions that affect the data in your cluster volume. In particular, Aurora upgrades the partition metadata in the system tables to be compatible with the MySQL 8.0 partition format. This stage can take a long time if the tables in your cluster have a large number of partitions.

      If any errors occur during this stage, you can find the details in the MySQL error logs. After this stage starts, if the upgrade process fails for any reason, Aurora restores the original data from the cloned cluster volume.

  7. Aurora upgrades the engine version on the reader DB instances.

  8. The upgrade process is completed. Aurora records a final event to indicate that the upgrade process completed successfully. Now your DB cluster is running the new major version.
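To follow these stages as they happen, you can poll the cluster's events from the AWS CLI as well as in the console. The cluster name below is a placeholder, and DRY_RUN=1 makes this sketch print the command instead of calling AWS:

```shell
# Show the last hour of events for the cluster being upgraded.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

run aws rds describe-events \
    --source-type db-cluster \
    --source-identifier sample-cluster \
    --duration 60
```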

Alternative blue-green upgrade technique

In some situations, your top priority is to perform an immediate switchover from the old cluster to an upgraded one. In such situations, you can use a multistep process that runs the old and new clusters side-by-side. Here, you replicate data from the old cluster to the new one until you are ready for the new cluster to take over. For details, see Using Amazon RDS Blue/Green Deployments for database updates.

How to perform an in-place upgrade

We recommend that you review the background material in How the Aurora MySQL in-place major version upgrade works.

Perform any preupgrade planning and testing, as described in Planning a major version upgrade for an Aurora MySQL cluster.

The following example upgrades the mydbcluster-cluster DB cluster to Aurora MySQL version 3.04.1.

To upgrade the major version of an Aurora MySQL DB cluster
  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

  2. If you used a custom parameter group for the original DB cluster, create a corresponding parameter group compatible with the new major version. Make any necessary adjustments to the configuration parameters in that new parameter group. For more information, see How in-place upgrades affect the parameter groups for a cluster.

  3. In the navigation pane, choose Databases.

  4. In the list, choose the DB cluster that you want to modify.

  5. Choose Modify.

  6. For Version, choose a new Aurora MySQL major version.

    We generally recommend using the latest minor version of the major version. Here, we choose the current default version.

    
    [Image: In-place upgrade of an Aurora MySQL DB cluster from version 2 to version 3]
  7. Choose Continue.

  8. On the next page, specify when to perform the upgrade. Choose During the next scheduled maintenance window or Immediately.

  9. (Optional) Periodically examine the Events page in the RDS console during the upgrade. Doing so helps you to monitor the progress of the upgrade and identify any issues. If the upgrade encounters any issues, consult Troubleshooting for Aurora MySQL in-place upgrade for the steps to take.

  10. If you created a new parameter group at the start of this procedure, associate the custom parameter group with your upgraded cluster. For more information, see How in-place upgrades affect the parameter groups for a cluster.

    Note

    Performing this step requires you to restart the cluster again to apply the new parameter group.

  11. (Optional) After you complete any post-upgrade testing, delete the manual snapshot that Aurora created at the beginning of the upgrade.
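The snapshot cleanup in the last step can be sketched with the AWS CLI. List the manual snapshots for the cluster, identify the one that Aurora created before the upgrade, and delete it. The snapshot identifier below is a placeholder, and DRY_RUN=1 prints the commands instead of calling AWS:

```shell
# Find and delete the pre-upgrade manual snapshot after testing is done.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

run aws rds describe-db-cluster-snapshots \
    --db-cluster-identifier sample-cluster \
    --snapshot-type manual

# Substitute the snapshot identifier you found in the previous output.
run aws rds delete-db-cluster-snapshot \
    --db-cluster-snapshot-identifier my-preupgrade-snapshot
```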

To upgrade the major version of an Aurora MySQL DB cluster, use the AWS CLI modify-db-cluster command with the following required parameters:

  • --db-cluster-identifier

  • --engine-version

  • --allow-major-version-upgrade

  • --apply-immediately or --no-apply-immediately

If your cluster uses any custom parameter groups, also include one or both of the following options:

  • --db-cluster-parameter-group-name, if the cluster uses a custom cluster parameter group

  • --db-instance-parameter-group-name, if any instances in the cluster use a custom DB parameter group

The following example upgrades the sample-cluster DB cluster to Aurora MySQL version 3.04.1. The upgrade happens immediately, instead of waiting for the next maintenance window.

Example

For Linux, macOS, or Unix:

aws rds modify-db-cluster \
    --db-cluster-identifier sample-cluster \
    --engine-version 8.0.mysql_aurora.3.04.1 \
    --allow-major-version-upgrade \
    --apply-immediately

For Windows:

aws rds modify-db-cluster ^
    --db-cluster-identifier sample-cluster ^
    --engine-version 8.0.mysql_aurora.3.04.1 ^
    --allow-major-version-upgrade ^
    --apply-immediately

You can combine other CLI commands with modify-db-cluster to create an automated end-to-end process for performing and verifying upgrades. For more information and examples, see Aurora MySQL in-place upgrade tutorial.

Note

If your cluster is part of an Aurora global database, the in-place upgrade procedure is slightly different. You call the modify-global-cluster command instead of modify-db-cluster. For more information, see In-place major upgrades for global databases.

To upgrade the major version of an Aurora MySQL DB cluster, use the RDS API operation ModifyDBCluster with the following required parameters:

  • DBClusterIdentifier

  • Engine

  • EngineVersion

  • AllowMajorVersionUpgrade

  • ApplyImmediately (set to true or false)

Note

If your cluster is part of an Aurora global database, the in-place upgrade procedure is slightly different. You call the ModifyGlobalCluster operation instead of ModifyDBCluster. For more information, see In-place major upgrades for global databases.

How in-place upgrades affect the parameter groups for a cluster

Aurora parameter groups have different sets of configuration settings for clusters that are compatible with MySQL 5.7 or 8.0. When you perform an in-place upgrade, the upgraded cluster and all its instances must use the corresponding cluster and instance parameter groups:

Your cluster and instances might use the default 5.7-compatible parameter groups. If so, the upgraded cluster and instances start with the default 8.0-compatible parameter groups. If your cluster and instances use any custom parameter groups, make sure to create corresponding 8.0-compatible parameter groups. Also make sure to specify those during the upgrade process.

Note

For most parameter settings, you can choose the custom parameter group at either of two points: when you create the cluster, or by associating the parameter group with the cluster later.

However, if you use a nondefault setting for the lower_case_table_names parameter, you must set up the custom parameter group with this setting in advance. Then specify the parameter group when you perform the snapshot restore to create the cluster. Any change to the lower_case_table_names parameter has no effect after the cluster is created.

We recommend that you use the same setting for lower_case_table_names when you upgrade from Aurora MySQL version 2 to version 3.

With an Aurora global database based on Aurora MySQL, you can't perform an in-place upgrade from Aurora MySQL version 2 to version 3 if the lower_case_table_names parameter is turned on. For more information on the methods that you can use, see Major version upgrades.

Important

If you specify any custom parameter group during the upgrade process, make sure to manually reboot the cluster after the upgrade finishes. Doing so makes the cluster begin using your custom parameter settings.
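The parameter-group workflow described in this section can be sketched with the AWS CLI. The group, cluster, and instance names are placeholders, and DRY_RUN=1 prints the commands instead of calling AWS:

```shell
# Create an 8.0-compatible custom cluster parameter group, attach it, reboot.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

run aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-mysql8-params \
    --db-parameter-group-family aurora-mysql8.0 \
    --description "Custom settings for the upgraded 8.0-compatible cluster"

run aws rds modify-db-cluster \
    --db-cluster-identifier sample-cluster \
    --db-cluster-parameter-group-name my-aurora-mysql8-params

# Reboot so the cluster begins using the custom parameter settings.
run aws rds reboot-db-instance \
    --db-instance-identifier sample-cluster-instance-1
```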

Changes to cluster properties between Aurora MySQL versions

When you upgrade from Aurora MySQL version 2 to version 3, make sure to check any applications or scripts that you use to set up or manage Aurora MySQL clusters and DB instances.

Also, change your code that manipulates parameter groups to account for the fact that the default parameter group names are different for 5.7- and 8.0-compatible clusters. The default parameter group names for Aurora MySQL version 2 and 3 clusters are default.aurora-mysql5.7 and default.aurora-mysql8.0, respectively.

For example, you might have code like the following that applies to your cluster before an upgrade.

# Check the default parameter values for MySQL 5.7–compatible clusters.
aws rds describe-db-parameters \
    --db-parameter-group-name default.aurora-mysql5.7 --region us-east-1

After upgrading the major version of the cluster, modify that code as follows.

# Check the default parameter values for MySQL 8.0–compatible clusters.
aws rds describe-db-parameters \
    --db-parameter-group-name default.aurora-mysql8.0 --region us-east-1

In-place major upgrades for global databases

For an Aurora global database, you upgrade the global database cluster. Aurora automatically upgrades all of the clusters at the same time and makes sure that they all run the same engine version. This requirement is because any changes to system tables, data file formats, and so on, are automatically replicated to all the secondary clusters.

Follow the instructions in How the Aurora MySQL in-place major version upgrade works. When you specify what to upgrade, make sure to choose the global database cluster instead of one of the clusters it contains.

If you use the AWS Management Console, choose the item with the role Global database.


    [Image: Upgrading global database cluster]

If you use the AWS CLI or RDS API, start the upgrade process by calling the modify-global-cluster command or ModifyGlobalCluster operation. You use one of these instead of modify-db-cluster or ModifyDBCluster.

Note

You can't specify a custom parameter group for the global database cluster while you're performing a major version upgrade of that Aurora global database. Create your custom parameter groups in each Region of the global cluster. Then apply them manually to the Regional clusters after the upgrade.

To upgrade the major version of an Aurora MySQL global database cluster by using the AWS CLI, use the modify-global-cluster command with the following required parameters:

  • --global-cluster-identifier

  • --engine aurora-mysql

  • --engine-version

  • --allow-major-version-upgrade

The following example upgrades the global database cluster to Aurora MySQL version 2.10.2.

Example

For Linux, macOS, or Unix:

aws rds modify-global-cluster \
    --global-cluster-identifier global_cluster_identifier \
    --engine aurora-mysql \
    --engine-version 5.7.mysql_aurora.2.10.2 \
    --allow-major-version-upgrade

For Windows:

aws rds modify-global-cluster ^
    --global-cluster-identifier global_cluster_identifier ^
    --engine aurora-mysql ^
    --engine-version 5.7.mysql_aurora.2.10.2 ^
    --allow-major-version-upgrade

Backtrack considerations

If the cluster that you upgraded had the Backtrack feature enabled, you can't backtrack the upgraded cluster to a time that's before the upgrade.

Aurora MySQL in-place upgrade tutorial

The following Linux examples show how you might perform the general steps of the in-place upgrade procedure using the AWS CLI.

This first example creates an Aurora DB cluster that's running a 2.x version of Aurora MySQL. The cluster includes a writer DB instance and a reader DB instance. The wait db-instance-available command pauses until the writer DB instance is available. That's the point when the cluster is ready to be upgraded.

aws rds create-db-cluster --db-cluster-identifier mynewdbcluster --engine aurora-mysql \
    --engine-version 5.7.mysql_aurora.2.10.2 ...

aws rds create-db-instance --db-instance-identifier mynewdbcluster-instance1 \
    --db-cluster-identifier mynewdbcluster --db-instance-class db.t4g.medium --engine aurora-mysql ...

aws rds wait db-instance-available --db-instance-identifier mynewdbcluster-instance1

The Aurora MySQL 3.x versions that you can upgrade the cluster to depend on the 2.x version that the cluster is currently running and on the AWS Region where the cluster is located. The first command, with --output text, shows the version that the cluster is currently running. The second command shows the full JSON response, which lists the valid upgrade targets. In that response, you can see details such as the aurora-mysql value that you use for the engine parameter. You can also see that upgrading to version 3.02.0 represents a major version upgrade.

aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
    --query '*[].{EngineVersion:EngineVersion}' --output text
5.7.mysql_aurora.2.10.2

aws rds describe-db-engine-versions --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.10.2 \
    --query '*[].[ValidUpgradeTarget]'
...
{
    "Engine": "aurora-mysql",
    "EngineVersion": "8.0.mysql_aurora.3.02.0",
    "Description": "Aurora MySQL 3.02.0 (compatible with MySQL 8.0.23)",
    "AutoUpgrade": false,
    "IsMajorVersionUpgrade": true,
    "SupportedEngineModes": [ "provisioned" ],
    "SupportsParallelQuery": true,
    "SupportsGlobalDatabases": true,
    "SupportsBabelfish": false
},
...
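When scripting an upgrade, you might want to pull only the major-version targets out of that response. The following sketch assumes the describe-db-engine-versions output was saved to a local file named targets.json (an illustrative name):

```shell
# Print only the engine versions that are major-version upgrade targets,
# reading a saved describe-db-engine-versions response (targets.json is
# an assumed local copy of the command output).
python3 -c '
import json, sys
doc = json.load(open(sys.argv[1]))
for engine in doc.get("DBEngineVersions", []):
    for target in engine.get("ValidUpgradeTarget", []):
        if target.get("IsMajorVersionUpgrade"):
            print(target["EngineVersion"])
' targets.json
```

Each version that the script prints is a candidate value for --engine-version when you also pass --allow-major-version-upgrade.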

This example shows that if you specify a target version number that isn't a valid upgrade target for the cluster, Aurora doesn't perform the upgrade. Aurora also doesn't perform a major version upgrade unless you include the --allow-major-version-upgrade parameter. That way, you can't accidentally perform an upgrade that has the potential to require extensive testing and changes to your application code.

aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster \
    --engine-version 5.7.mysql_aurora.2.09.2 --apply-immediately

An error occurred (InvalidParameterCombination) when calling the ModifyDBCluster operation: Cannot find upgrade target from 5.7.mysql_aurora.2.10.2 with requested version 5.7.mysql_aurora.2.09.2.

aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster \
    --engine-version 8.0.mysql_aurora.3.02.0 --region us-east-1 --apply-immediately

An error occurred (InvalidParameterCombination) when calling the ModifyDBCluster operation: The AllowMajorVersionUpgrade flag must be present when upgrading to a new major version.

aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster \
    --engine-version 8.0.mysql_aurora.3.02.0 --apply-immediately --allow-major-version-upgrade
{
    "DBClusterIdentifier": "mynewdbcluster",
    "Status": "available",
    "Engine": "aurora-mysql",
    "EngineVersion": "5.7.mysql_aurora.2.10.2"
}

It takes a few moments for the status of the cluster and associated DB instances to change to upgrading. The version numbers for the cluster and DB instances only change when the upgrade is finished. Again, you can use the wait db-instance-available command for the writer DB instance to wait until the upgrade is finished before proceeding.

aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
    --query '*[].[Status,EngineVersion]' --output text
upgrading   5.7.mysql_aurora.2.10.2

aws rds describe-db-instances --db-instance-identifier mynewdbcluster-instance1 \
    --query '*[].{DBInstanceIdentifier:DBInstanceIdentifier,DBInstanceStatus:DBInstanceStatus} | [0]'
{
    "DBInstanceIdentifier": "mynewdbcluster-instance1",
    "DBInstanceStatus": "upgrading"
}

aws rds wait db-instance-available --db-instance-identifier mynewdbcluster-instance1

At this point, the version number for the cluster matches the one that was specified for the upgrade.

aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
    --query '*[].[EngineVersion]' --output text
8.0.mysql_aurora.3.02.0

The preceding example did an immediate upgrade by specifying the --apply-immediately parameter. To let the upgrade happen at a convenient time when the cluster isn't expected to be busy, you can specify the --no-apply-immediately parameter. Doing so makes the upgrade start during the next maintenance window for the cluster. The maintenance window defines the period during which maintenance operations can start. A long-running operation might not finish during the maintenance window. Thus, you don't need to define a larger maintenance window even if you expect that the upgrade might take a long time.

The following example upgrades a cluster that's initially running Aurora MySQL version 2.10.2. In the describe-db-engine-versions output, the False and True values represent the IsMajorVersionUpgrade property. From version 2.10.2, you can upgrade to some other 2.* versions. Those upgrades aren't considered major version upgrades and so don't require an in-place upgrade. In-place upgrade is only available for upgrades to the 3.* versions that are shown in the list.

aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
    --query '*[].{EngineVersion:EngineVersion}' --output text
5.7.mysql_aurora.2.10.2

aws rds describe-db-engine-versions --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.10.2 \
    --query '*[].[ValidUpgradeTarget]|[0][0]|[*].[EngineVersion,IsMajorVersionUpgrade]' --output text
5.7.mysql_aurora.2.10.3   False
5.7.mysql_aurora.2.11.0   False
5.7.mysql_aurora.2.11.1   False
8.0.mysql_aurora.3.01.1   True
8.0.mysql_aurora.3.02.0   True
8.0.mysql_aurora.3.02.2   True

aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster \
    --engine-version 8.0.mysql_aurora.3.02.0 --no-apply-immediately --allow-major-version-upgrade
...

When a cluster is created without a specified maintenance window, Aurora picks a random day of the week. In this case, the modify-db-cluster command is submitted on a Monday. Thus, we change the maintenance window to be Tuesday morning. All times are represented in the UTC time zone. The tue:10:00-tue:10:30 window corresponds to 2:00-2:30 AM Pacific time. The change in the maintenance window takes effect immediately.

aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
    --query '*[].[PreferredMaintenanceWindow]'
[
    [
        "sat:08:20-sat:08:50"
    ]
]

aws rds modify-db-cluster --db-cluster-identifier mynewdbcluster \
    --preferred-maintenance-window tue:10:00-tue:10:30

aws rds describe-db-clusters --db-cluster-identifier mynewdbcluster \
    --query '*[].[PreferredMaintenanceWindow]'
[
    [
        "tue:10:00-tue:10:30"
    ]
]
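Maintenance windows use the format ddd:hh24:mi-ddd:hh24:mi, are always expressed in UTC, and must be at least 30 minutes long. Before calling modify-db-cluster, you can sanity-check a candidate window locally. This is a quick sketch; the window string shown is the one from the example:

```shell
# Compute the length, in minutes, of an RDS-style maintenance window.
# Windows may wrap past the end of the week (for example sun:23:45-mon:00:15),
# which the modulo arithmetic below handles.
python3 -c '
import sys
window = sys.argv[1]                     # for example "tue:10:00-tue:10:30"
days = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]
def to_minutes(endpoint):
    day, hh, mm = endpoint.split(":")
    return (days.index(day) * 24 + int(hh)) * 60 + int(mm)
start, end = (to_minutes(p) for p in window.split("-"))
print((end - start) % (7 * 24 * 60))     # length in minutes
' tue:10:00-tue:10:30
```

A result below 30 means the window is shorter than the minimum that RDS accepts.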

The following example shows how to get a report of the events generated by the upgrade. The --duration argument represents the number of minutes of events to retrieve. This argument is needed because by default, describe-events returns only the events from the last hour.

aws rds describe-events --source-type db-cluster --source-identifier mynewdbcluster --duration 20160
{
    "Events": [
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "DB cluster created",
            "EventCategories": [ "creation" ],
            "Date": "2022-11-17T01:24:11.093000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Performing online pre-upgrade checks.",
            "EventCategories": [ "maintenance" ],
            "Date": "2022-11-18T22:57:08.450000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Performing offline pre-upgrade checks.",
            "EventCategories": [ "maintenance" ],
            "Date": "2022-11-18T22:57:59.519000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Creating pre-upgrade snapshot [preupgrade-mynewdbcluster-5-7-mysql-aurora-2-10-2-to-8-0-mysql-aurora-3-02-0-2022-11-18-22-55].",
            "EventCategories": [ "maintenance" ],
            "Date": "2022-11-18T23:00:22.318000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Cloning volume.",
            "EventCategories": [ "maintenance" ],
            "Date": "2022-11-18T23:01:45.428000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Purging undo records for old row versions. Records remaining: 164",
            "EventCategories": [ "maintenance" ],
            "Date": "2022-11-18T23:02:25.141000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Purging undo records for old row versions. Records remaining: 164",
            "EventCategories": [ "maintenance" ],
            "Date": "2022-11-18T23:06:23.036000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Upgrade in progress: Upgrading database objects.",
            "EventCategories": [ "maintenance" ],
            "Date": "2022-11-18T23:06:48.208000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        },
        {
            "SourceIdentifier": "mynewdbcluster",
            "SourceType": "db-cluster",
            "Message": "Database cluster major version has been upgraded",
            "EventCategories": [ "maintenance" ],
            "Date": "2022-11-18T23:10:28.999000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mynewdbcluster"
        }
    ]
}

Finding the reasons for upgrade failures

In the previous tutorial, the upgrade from Aurora MySQL version 2 to version 3 succeeded. But if the upgrade had failed, you would want to know why.

You can start by using the describe-events AWS CLI command to look at the DB cluster events. This example shows the events for mydbcluster for the last 10 hours.

aws rds describe-events \
    --source-type db-cluster \
    --source-identifier mydbcluster \
    --duration 600

In this case, we had an upgrade precheck failure.

{
    "Events": [
        {
            "SourceIdentifier": "mydbcluster",
            "SourceType": "db-cluster",
            "Message": "Database cluster engine version upgrade started.",
            "EventCategories": [ "maintenance" ],
            "Date": "2024-04-11T13:23:22.846000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mydbcluster"
        },
        {
            "SourceIdentifier": "mydbcluster",
            "SourceType": "db-cluster",
            "Message": "Database cluster is in a state that cannot be upgraded: Upgrade prechecks failed. For more details, see the upgrade-prechecks.log file. For more information on troubleshooting the cause of the upgrade failure, see https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.MajorVersionUpgrade.html#AuroraMySQL.Upgrading.Troubleshooting.",
            "EventCategories": [ "maintenance" ],
            "Date": "2024-04-11T13:23:24.373000+00:00",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:mydbcluster"
        }
    ]
}

To diagnose the exact cause of the problem, examine the database logs for the writer DB instance. When an upgrade to Aurora MySQL version 3 fails, the writer instance contains a log file with the name upgrade-prechecks.log. This example shows how to detect the presence of that log and then download it to a local file for examination.

aws rds describe-db-log-files --db-instance-identifier mydbcluster-instance \
    --query '*[].[LogFileName]' --output text
error/mysql-error-running.log
error/mysql-error-running.log.2024-04-11.20
error/mysql-error-running.log.2024-04-11.21
error/mysql-error.log
external/mysql-external.log
upgrade-prechecks.log

aws rds download-db-log-file-portion --db-instance-identifier mydbcluster-instance \
    --log-file-name upgrade-prechecks.log \
    --starting-token 0 \
    --output text >upgrade_prechecks.log

The upgrade-prechecks.log file is in JSON format. We download it using the --output text option to avoid encoding JSON output within another JSON wrapper. For Aurora MySQL version 3 upgrades, this log always includes certain informational and warning messages. It only includes error messages if the upgrade fails. If the upgrade succeeds, the log file isn't produced at all.

To summarize all of the errors and display the associated object and description fields, you can run the command grep -A 2 '"level": "Error"' on the contents of the upgrade-prechecks.log file. Doing so displays each error line and the two lines after it. These contain the name of the corresponding database object and guidance about how to correct the problem.

$ cat upgrade-prechecks.log | grep -A 2 '"level": "Error"'
        "level": "Error",
        "dbObject": "problematic_upgrade.dangling_fulltext_index",
        "description": "Table `problematic_upgrade.dangling_fulltext_index` contains dangling FULLTEXT index. Kindly recreate the table before upgrade."
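Because the log is JSON, you can also parse it rather than using grep, which is more robust when a description spans multiple lines. The following sketch doesn't assume any particular nesting; it walks the whole document and prints every object whose level field is Error, using the same field names (level, dbObject, description) that appear in the grep output:

```shell
# List every error entry in the downloaded precheck log, wherever it nests
# in the JSON. The file name matches the download example above.
python3 -c '
import json, sys
def walk(node):
    if isinstance(node, dict):
        if node.get("level") == "Error":
            print(node.get("dbObject", "?"), "-", node.get("description", ""))
        for value in node.values():
            walk(value)
    elif isinstance(node, list):
        for item in node:
            walk(item)
walk(json.load(open(sys.argv[1])))
' upgrade_prechecks.log
```

Each printed line pairs the offending database object with the guidance for correcting it.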

In this example, you can run the following SQL command on the offending table to try to fix the issue, or you can re-create the table without the dangling index.

OPTIMIZE TABLE problematic_upgrade.dangling_fulltext_index;

Then retry the upgrade.

Troubleshooting for Aurora MySQL in-place upgrade

Use the following tips to help troubleshoot problems with Aurora MySQL in-place upgrades. These tips don't apply to Aurora Serverless DB clusters.

The following list describes conditions that can cause an in-place upgrade to be canceled or to take a long time, the effect of each condition, and the solution that allows the in-place upgrade to complete within the maintenance window.
Reason: Associated Aurora cross-Region replica isn't patched yet
Effect: Aurora cancels the upgrade.
Solution: Upgrade the Aurora cross-Region replica and try again.
Reason: Cluster has XA transactions in the prepared state
Effect: Aurora cancels the upgrade.
Solution: Commit or roll back all prepared XA transactions.
Reason: Cluster is processing data definition language (DDL) statements
Effect: Aurora cancels the upgrade.
Solution: Consider waiting and performing the upgrade after all DDL statements are finished.
Reason: Cluster has uncommitted changes for many rows
Effect: Upgrade might take a long time.

The upgrade process rolls back the uncommitted changes. The indicator for this condition is the value of TRX_ROWS_MODIFIED in the INFORMATION_SCHEMA.INNODB_TRX table.

Solution: Consider performing the upgrade only after all large transactions are committed or rolled back.

Reason: Cluster has a high number of undo records
Effect: Upgrade might take a long time.

Even if the uncommitted transactions don't affect a large number of rows, they might involve a large volume of data. For example, you might be inserting large BLOBs. Aurora doesn't automatically detect or generate an event for this kind of transaction activity. The indicator for this condition is the history list length (HLL). The upgrade process rolls back the uncommitted changes.

You can check the HLL in the output from the SHOW ENGINE INNODB STATUS SQL command, or directly by using the following SQL query:

SELECT count FROM information_schema.innodb_metrics WHERE name = 'trx_rseg_history_len';

You can also monitor the RollbackSegmentHistoryListLength metric in Amazon CloudWatch.

Solution: Consider performing the upgrade only after the HLL is smaller.

Reason: Cluster is in the process of committing a large binary log transaction
Effect: Upgrade might take a long time.

The upgrade process waits until the binary log changes are applied. More transactions or DDL statements could start during this period, further slowing down the upgrade process.

Solution: Schedule the upgrade process for a time when the cluster isn't busy generating binary log replication changes. Aurora doesn't automatically detect or generate an event for this condition.

Reason: Schema inconsistencies resulting from file removal or corruption
Effect: Aurora cancels the upgrade.

Solution: Change the default storage engine for temporary tables from MyISAM to InnoDB. Perform the following steps:

  1. Modify the default_tmp_storage_engine DB parameter to InnoDB.

  2. Reboot the DB cluster.

  3. After rebooting, confirm that the default_tmp_storage_engine DB parameter is set to InnoDB. Use the following command:

    show global variables like 'default_tmp_storage_engine';
  4. Make sure not to create any temporary tables that use the MyISAM storage engine. We recommend that you pause any database workload and not create any new database connections, because you're upgrading soon.

  5. Try the in-place upgrade again.
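If you script this check, you can run it against a saved copy of the query output from step 3. This is a minimal sketch; the file name tmp_engine.out is an assumed local capture of the SHOW statement's result:

```shell
# Confirm, from saved output of the SHOW statement in step 3, that the
# default temporary-table storage engine is now InnoDB.
if grep -iq "innodb" tmp_engine.out; then
    echo "default_tmp_storage_engine OK"
else
    echo "still not InnoDB - fix before retrying the upgrade"
fi
```

A failing check means the parameter change hasn't taken effect, typically because the cluster hasn't rebooted since the parameter was modified.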

Reason: Master user was deleted
Effect: Aurora cancels the upgrade.

Important

Don't delete the master user.

Solution: If the master user was deleted, restore it by using the following SQL statements:

CREATE USER 'master_username'@'%' IDENTIFIED BY 'master_user_password'
    REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES,
    INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE,
    REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE,
    ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, LOAD FROM S3, SELECT INTO S3,
    INVOKE LAMBDA, INVOKE SAGEMAKER, INVOKE COMPREHEND
    ON *.* TO 'master_username'@'%' WITH GRANT OPTION;

For more details on troubleshooting issues that cause upgrade prechecks to fail, see the following blogs:

You can use the following steps to perform your own checks for some of the conditions in the preceding table. That way, you can schedule the upgrade at a time when you know the database is in a state where the upgrade can complete successfully and quickly.

  • You can check for open XA transactions by executing the XA RECOVER statement. You can then commit or roll back the XA transactions before starting the upgrade.

  • You can check for DDL statements by executing a SHOW PROCESSLIST statement and looking for CREATE, DROP, ALTER, RENAME, and TRUNCATE statements in the output. Allow all DDL statements to finish before starting the upgrade.

  • You can check the total number of uncommitted rows by querying the INFORMATION_SCHEMA.INNODB_TRX table. The table contains one row for each transaction. The TRX_ROWS_MODIFIED column contains the number of rows modified or inserted by the transaction.

  • You can check the length of the InnoDB history list by executing the SHOW ENGINE INNODB STATUS SQL statement and looking for the History list length in the output. You can also check the value directly by running the following query:

    SELECT count FROM information_schema.innodb_metrics WHERE name = 'trx_rseg_history_len';

    The length of the history list corresponds to the amount of undo information stored by the database to implement multi-version concurrency control (MVCC).
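These checks can be combined into a simple go/no-go gate in a deployment script: query the two numeric indicators, then decide whether to submit the modify-db-cluster call. The thresholds below are illustrative placeholders, not AWS guidance; choose values that match your workload:

```shell
# Decide whether to start the upgrade, given the total uncommitted modified
# rows (summed from INFORMATION_SCHEMA.INNODB_TRX) and the history list
# length (trx_rseg_history_len). Both thresholds are arbitrary examples.
python3 -c '
import sys
modified_rows, history_list_length = int(sys.argv[1]), int(sys.argv[2])
ready = modified_rows < 10000 and history_list_length < 100000
print("proceed" if ready else "wait")
' 1200 45000
```

When the script prints "wait", reschedule the upgrade and re-run the checks after the large transactions finish and the HLL drops.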

Post-upgrade cleanup for Aurora MySQL version 3

After you finish upgrading any Aurora MySQL version 2 clusters to Aurora MySQL version 3, you can perform these other cleanup actions:

  • Create new MySQL 8.0–compatible versions of any custom parameter groups. Apply any necessary custom parameter values to the new parameter groups.

  • Update any CloudWatch alarms, setup scripts, and so on to use the new names for any metrics whose names were affected by inclusive language changes. For a list of such metrics, see Inclusive language changes for Aurora MySQL version 3.

  • Update any AWS CloudFormation templates to use the new names for any configuration parameters whose names were affected by inclusive language changes. For a list of such parameters, see Inclusive language changes for Aurora MySQL version 3.

Spatial indexes

After upgrading to Aurora MySQL version 3, check whether you need to drop or re-create any spatial indexes. Before MySQL 8.0, Aurora could optimize spatial queries using indexes that didn't contain a spatial reference identifier (SRID). Aurora MySQL version 3 only uses spatial indexes that contain SRIDs. During an upgrade, Aurora automatically drops any spatial indexes without SRIDs and prints warning messages in the database log. If you observe such warning messages, create new spatial indexes with SRIDs after the upgrade. For more information about changes to spatial functions and data types in MySQL 8.0, see Changes in MySQL 8.0 in the MySQL Reference Manual.