Publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs

You can configure your Aurora PostgreSQL DB cluster to publish log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage. Unlike RDS for PostgreSQL, which lets you publish both upgrade and PostgreSQL logs, Aurora PostgreSQL supports publishing only the PostgreSQL log to CloudWatch Logs.

Aurora PostgreSQL supports publishing logs to CloudWatch Logs for the following versions:

  • 13.3 and higher 13 versions

  • 12.4 and higher 12 versions

  • 11.6 and higher 11 versions

  • 10.11 and higher 10 versions

Note

Be aware of the following:

  • If exporting log data is disabled, Aurora doesn't delete existing log groups or log streams.

  • If exporting log data is disabled, existing log data remains available in CloudWatch Logs based on the log retention settings, which means you still incur charges for the stored log data. You can delete log streams and log groups using the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API, as shown in the example following this list.

  • If you don't want to export logs to CloudWatch Logs, make sure that all methods of exporting logs are disabled. These methods are the AWS Management Console, the AWS CLI, and the RDS API.
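For example, if you no longer need exported log data, you can remove its log group with the CloudWatch Logs AWS CLI. The following is a minimal sketch that assumes a DB cluster named my-db-cluster; deleting the log group permanently removes all of its log streams and stored events.

# my-db-cluster is a placeholder DB cluster name
aws logs delete-log-group \
    --log-group-name /aws/rds/cluster/my-db-cluster/postgresql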

Publishing logs to Amazon CloudWatch

You can use the AWS Management Console, the AWS CLI, or the RDS API to publish Aurora PostgreSQL logs to Amazon CloudWatch Logs.

You can publish Aurora PostgreSQL logs to CloudWatch Logs with the console.

To publish Aurora PostgreSQL logs from the console

  1. Open the Amazon RDS console at https://console.aws.amazon.com/rds/.

  2. In the navigation pane, choose Databases.

  3. Choose the Aurora PostgreSQL DB cluster that you want to publish the log data for.

  4. Choose Modify.

  5. In the Log exports section, choose PostgreSQL log.

  6. Choose Continue, and then choose Modify cluster on the summary page.

You can publish Aurora PostgreSQL logs with the AWS CLI. You can run the modify-db-cluster AWS CLI command with the following options:

  • --db-cluster-identifier—The DB cluster identifier.

  • --cloudwatch-logs-export-configuration—The configuration setting for the log types to be set for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora PostgreSQL logs by running one of the following AWS CLI commands:

  • create-db-cluster

  • restore-db-cluster-from-s3

  • restore-db-cluster-from-snapshot

  • restore-db-cluster-to-point-in-time

Run one of these AWS CLI commands with the following options:

  • --db-cluster-identifier—The DB cluster identifier.

  • --engine—The database engine.

  • --enable-cloudwatch-logs-exports—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other options might be required depending on the AWS CLI command that you run.

The following command creates an Aurora PostgreSQL DB cluster to publish log files to CloudWatch Logs.

For Linux, macOS, or Unix:

aws rds create-db-cluster \
    --db-cluster-identifier my-db-cluster \
    --engine aurora-postgresql \
    --enable-cloudwatch-logs-exports postgresql

For Windows:

aws rds create-db-cluster ^
    --db-cluster-identifier my-db-cluster ^
    --engine aurora-postgresql ^
    --enable-cloudwatch-logs-exports postgresql

The following command modifies an existing Aurora PostgreSQL DB cluster to publish log files to CloudWatch Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this object is EnableLogTypes, and its value is postgresql.

For Linux, macOS, or Unix:

aws rds modify-db-cluster \
    --db-cluster-identifier my-db-cluster \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql"]}'

For Windows:

aws rds modify-db-cluster ^
    --db-cluster-identifier my-db-cluster ^
    --cloudwatch-logs-export-configuration "{\"EnableLogTypes\":[\"postgresql\"]}"
Note

When using the Windows command prompt, make sure to escape double quotation marks (") in JSON code by prefixing them with a backslash (\).
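To check which log exports are currently enabled for the cluster, you can inspect its EnabledCloudwatchLogsExports attribute. The following is a sketch for the my-db-cluster example, shown in the Linux, macOS, or Unix form:

# Returns ["postgresql"] when the PostgreSQL log export is enabled
aws rds describe-db-clusters \
    --db-cluster-identifier my-db-cluster \
    --query "DBClusters[0].EnabledCloudwatchLogsExports"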

The following example modifies an existing Aurora PostgreSQL DB cluster to disable publishing log files to CloudWatch Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this object is DisableLogTypes, and its value is postgresql.

For Linux, macOS, or Unix:

aws rds modify-db-cluster \
    --db-cluster-identifier my-db-cluster \
    --cloudwatch-logs-export-configuration '{"DisableLogTypes":["postgresql"]}'

For Windows:

aws rds modify-db-cluster ^
    --db-cluster-identifier my-db-cluster ^
    --cloudwatch-logs-export-configuration "{\"DisableLogTypes\":[\"postgresql\"]}"
Note

When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).

You can publish Aurora PostgreSQL logs with the RDS API. You can run the ModifyDBCluster operation with the following options:

  • DBClusterIdentifier – The DB cluster identifier.

  • CloudwatchLogsExportConfiguration – The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora PostgreSQL logs with the RDS API by running one of the following RDS API operations:

  • CreateDBCluster

  • RestoreDBClusterFromS3

  • RestoreDBClusterFromSnapshot

  • RestoreDBClusterToPointInTime

Run one of these RDS API operations with the following parameters:

  • DBClusterIdentifier—The DB cluster identifier.

  • Engine—The database engine.

  • EnableCloudwatchLogsExports—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other parameters might be required depending on the RDS API operation that you run.

Monitoring log events in Amazon CloudWatch

After you enable publishing of Aurora PostgreSQL log events, you can monitor the events in Amazon CloudWatch Logs. For more information about monitoring, see View log data sent to CloudWatch Logs.

A new log group is automatically created for the Aurora DB cluster under the following prefix. In this prefix, cluster-name represents the DB cluster name and log_type represents the log type.

/aws/rds/cluster/cluster-name/log_type

For example, suppose that you configure the export function to include the postgresql log for a DB cluster named my-db-cluster. In this case, PostgreSQL log data is stored in the /aws/rds/cluster/my-db-cluster/postgresql log group.

All of the events from all of the DB instances in a DB cluster are pushed to a log group using different log streams.
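For example, you can list the log streams in the log group to see the streams for the DB instances in your cluster. The following is a sketch that assumes the my-db-cluster log group from the previous example:

# List the log streams created for the DB instances in the cluster
aws logs describe-log-streams \
    --log-group-name /aws/rds/cluster/my-db-cluster/postgresql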

If a log group with the specified name exists, Aurora uses that log group to export log data for the Aurora DB cluster. You can use automated configuration, such as AWS CloudFormation, to create log groups with predefined log retention periods, metric filters, and customer access. Otherwise, a new log group is automatically created using the default log retention period, Never Expire, in CloudWatch Logs. You can use the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API to change the log retention period. For more information about changing log retention periods in CloudWatch Logs, see Change log data retention in CloudWatch Logs.
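For example, you can change the retention period of an existing log group with the CloudWatch Logs AWS CLI. The following is a sketch that assumes the my-db-cluster log group and a 30-day retention period:

# Keep exported PostgreSQL log data for 30 days
aws logs put-retention-policy \
    --log-group-name /aws/rds/cluster/my-db-cluster/postgresql \
    --retention-in-days 30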

You can use the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API to search for information within the log events for a DB cluster. For more information about searching and filtering log data, see Searching and filtering log data.
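For example, you can search the exported PostgreSQL log events from the AWS CLI. The following is a minimal sketch that assumes the my-db-cluster log group; the filter pattern is only an illustration:

# Return log events that contain the string ERROR
aws logs filter-log-events \
    --log-group-name /aws/rds/cluster/my-db-cluster/postgresql \
    --filter-pattern "ERROR"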

Analyze Aurora PostgreSQL logs using CloudWatch Logs Insights

After publishing Aurora PostgreSQL logs to CloudWatch Logs, you can analyze logs graphically and create dashboards using CloudWatch Logs Insights.

To analyze Aurora PostgreSQL logs with CloudWatch Logs Insights

  1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

  2. In the navigation pane, choose Logs, and then choose Logs Insights.

  3. In Select log group(s), select the log group for your DB cluster.

    
  4. In the query editor, delete the query that is currently shown, enter the following, and then choose Run query.

    ##Autovacuum execution time in seconds per 5 minute
    fields @message
    | parse @message "elapsed: * s" as @duration_sec
    | filter @message like / automatic vacuum /
    | display @duration_sec
    | sort @timestamp
    | stats avg(@duration_sec) as avg_duration_sec, max(@duration_sec) as max_duration_sec by bin(5 min)
    
  5. Choose the Visualization tab.

    
  6. Choose Add to dashboard.

  7. In Select a dashboard, either select a dashboard or enter a name to create a new dashboard.

  8. In Widget type, choose a widget type for your visualization.

    
  9. (Optional) Add more widgets based on your log query results.

    1. Choose Add widget.

    2. Choose a widget type, such as Line.

      
    3. In the Add to this dashboard window, choose Logs.

      
    4. In Select log group(s), select the log group for your DB cluster.

    5. In the query editor, delete the query that is currently shown, enter the following, and then choose Run query.

      ##Autovacuum tuples statistics per 5 min
      fields @timestamp, @message
      | parse @message "tuples: " as @tuples_temp
      | parse @tuples_temp "* removed," as @tuples_removed
      | parse @tuples_temp "remain, * are dead but not yet removable, " as @tuples_not_removable
      | filter @message like / automatic vacuum /
      | sort @timestamp
      | stats avg(@tuples_removed) as avg_tuples_removed, avg(@tuples_not_removable) as avg_tuples_not_removable by bin(5 min)
      
    6. Choose Create widget.

      Your dashboard now contains two widgets: one graph for autovacuum execution time and one for autovacuum tuples statistics.
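You can also run a Logs Insights query without the console by using the CloudWatch Logs AWS CLI. The following is a sketch that assumes the my-db-cluster log group and adapts the autovacuum execution time query from the preceding procedure. The start and end times are placeholder Unix timestamps in seconds, and start-query returns a query ID that you then pass to get-query-results.

# Start the query over the chosen time range (placeholder timestamps)
aws logs start-query \
    --log-group-name /aws/rds/cluster/my-db-cluster/postgresql \
    --start-time 1609459200 \
    --end-time 1609462800 \
    --query-string 'fields @message | parse @message "elapsed: * s" as @duration_sec | filter @message like / automatic vacuum / | stats avg(@duration_sec) as avg_duration_sec by bin(5 min)'

# Retrieve the results by using the query ID returned by start-query
aws logs get-query-results --query-id <query-id>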