Amazon Neptune
User Guide (API Version 2017-11-29)

Amazon Neptune Basic Operational Guidelines

The following are basic operational guidelines that you should follow when working with Neptune.

  • Monitor your CPU and memory usage. This helps you know when to migrate to a DB instance class with greater CPU or memory capacity to achieve the query performance that you require. You can set up Amazon CloudWatch to notify you when usage patterns change or when you approach the capacity of your deployment. Doing so can help you maintain system performance and availability.

    Because Neptune has its own memory manager, it is normal to see relatively low memory usage even when CPU usage is high. Encountering out-of-memory exceptions when executing queries is the best indicator that you need to increase freeable memory.

  • Enable automatic backups and set the backup window to occur at a convenient time.

  • Test failover for your DB instance to understand how long the process takes for your use case. Testing also helps ensure that the application that accesses your DB instance can automatically connect to the new DB instance after failover.
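The failover test described above can be scripted. The sketch below is an illustration, not an official procedure: it assumes a hypothetical cluster identifier (`my-neptune-cluster`) and uses the `failover_db_cluster` and `describe_db_clusters` operations of the boto3 `neptune` client to trigger a failover and roughly measure how long the cluster takes to report `available` again.

```python
import time


def build_failover_request(cluster_id):
    """Build the parameters for the FailoverDBCluster API call."""
    return {"DBClusterIdentifier": cluster_id}


def time_failover(cluster_id="my-neptune-cluster"):
    """Trigger a failover and measure roughly how long the cluster
    takes to become available again (requires AWS credentials)."""
    import boto3  # AWS SDK for Python

    neptune = boto3.client("neptune")
    start = time.time()
    neptune.failover_db_cluster(**build_failover_request(cluster_id))
    while True:
        # Poll until the cluster reports "available" again.
        status = neptune.describe_db_clusters(
            DBClusterIdentifier=cluster_id
        )["DBClusters"][0]["Status"]
        if status == "available":
            return time.time() - start
        time.sleep(5)
```

Note that this measures cluster status only; your application may observe a shorter or longer interruption depending on DNS caching and connection handling on the client side.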

Amazon Neptune Security Best Practices

Use AWS Identity and Access Management (IAM) accounts to control access to Neptune API actions. Control actions that create, modify, or delete Neptune resources (such as DB instances, security groups, option groups, or parameter groups), and actions that perform common administrative actions (such as backing up and restoring DB instances).

  • Assign an individual IAM account to each person who manages Neptune resources. Don't use your AWS account root user to manage Neptune resources. Create an IAM user for everyone, including yourself.

  • Grant each user the minimum set of permissions required to perform their duties.

  • Use IAM groups to effectively manage permissions for multiple users.

  • Rotate your IAM credentials regularly.

For more information about using IAM to access Neptune resources, see Security in Amazon Neptune. For general information about working with IAM, see AWS Identity and Access Management and IAM Best Practices in the IAM User Guide.
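As a sketch of the least-privilege guidance above, the example below attaches a minimal read-only inline policy to an IAM user with boto3's `put_user_policy`. The policy content is illustrative: Neptune's management API shares the RDS API surface, so management actions such as `rds:DescribeDBClusters` use the `rds:` prefix, and the `Resource` value here is a placeholder you would normally scope to specific ARNs.

```python
import json

# Minimal read-only policy sketch; scope "Resource" to real ARNs in practice.
READ_ONLY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBClusters",
                "rds:DescribeDBInstances",
            ],
            "Resource": "*",
        }
    ],
}


def attach_read_only_policy(user_name, policy_name="NeptuneReadOnly"):
    """Attach the inline policy to an IAM user (requires AWS credentials)."""
    import boto3  # AWS SDK for Python

    boto3.client("iam").put_user_policy(
        UserName=user_name,
        PolicyName=policy_name,
        PolicyDocument=json.dumps(READ_ONLY_POLICY),
    )
```

Granting only `Describe*` actions like this lets an operator inspect clusters and instances without being able to create, modify, or delete them.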

Limit the Number of Distinct Predicates You Use

Try to keep the number of distinct predicates you create as low as you can. If your data model contains a large number of distinct predicates, you may experience reduced performance and higher operational costs.

The way Neptune stores and accesses graph data assumes that the number of predicates you create will be small, on the order of tens or at most hundreds.

For a detailed explanation, see The Neptune Graph Data Model and How Statements Are Indexed in Neptune.

Best Practices for Using Neptune Metrics

To identify performance issues caused by insufficient resources and other common bottlenecks, you can monitor the metrics available for your Neptune DB cluster.

Monitor performance metrics on a regular basis to gather data about the average, maximum, and minimum values for a variety of time ranges. This helps identify when performance is degraded. Using this data, you can set Amazon CloudWatch alarms for particular metric thresholds so you are alerted if they are reached.

When you set up a new DB cluster and get it running with a typical workload, try to capture the average, maximum, and minimum values of all of the performance metrics at a number of different intervals (for example, one hour, 24 hours, one week, two weeks). This gives you an idea of what is normal. It helps to get comparisons for both peak and off-peak hours of operation. You can then use this information to identify when performance is dropping below standard levels, and can set alarms accordingly.
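The baseline-gathering step above can be sketched with the CloudWatch `GetMetricStatistics` API. In this illustration the cluster identifier is a placeholder; `AWS/Neptune` is the CloudWatch namespace for Neptune metrics, and the request asks for average, maximum, and minimum values over the suggested time ranges.

```python
from datetime import datetime, timedelta, timezone

# Time ranges suggested above: one hour, 24 hours, one week, two weeks.
BASELINE_WINDOWS = {
    "1h": timedelta(hours=1),
    "24h": timedelta(hours=24),
    "1w": timedelta(weeks=1),
    "2w": timedelta(weeks=2),
}


def baseline_request(metric, window, cluster_id="my-neptune-cluster"):
    """Build a GetMetricStatistics request for one Neptune metric.

    The cluster identifier is a placeholder.
    """
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/Neptune",
        "MetricName": metric,
        "Dimensions": [{"Name": "DBClusterIdentifier", "Value": cluster_id}],
        "StartTime": now - window,
        "EndTime": now,
        "Period": 300,  # 5-minute datapoints
        "Statistics": ["Average", "Maximum", "Minimum"],
    }


def collect_baselines(metric="CPUUtilization"):
    """Fetch a baseline for each window (requires AWS credentials)."""
    import boto3  # AWS SDK for Python

    cw = boto3.client("cloudwatch")
    return {
        label: cw.get_metric_statistics(**baseline_request(metric, window))
        for label, window in BASELINE_WINDOWS.items()
    }
```

Capturing these numbers during both peak and off-peak hours gives you the "normal" profile that the alarms described above are measured against.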

See Monitoring Neptune Using Amazon CloudWatch for information about how to view Neptune metrics.

The following are the most important metrics to start with:

  • CPU utilization — Percentage of computer processing capacity used. High values for CPU consumption might be appropriate, depending on your query-performance goals.

  • Freeable memory — How much RAM is available on the DB instance, in megabytes. Neptune has its own memory manager, so this metric may be lower than you expect. A good sign that you should consider upgrading your instance class to one with more RAM is if queries often throw out-of-memory exceptions.

The red line in the Monitoring tab metrics is marked at 75 percent for CPU and memory metrics. If CPU or memory consumption frequently crosses that line, check your workload and consider upgrading your instance class to improve query performance.
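You can mirror that 75 percent line with a CloudWatch alarm of your own. This sketch builds a `PutMetricAlarm` request for sustained high CPU; the alarm name, cluster identifier, and SNS topic ARN are placeholders.

```python
def cpu_alarm_request(cluster_id="my-neptune-cluster", threshold=75.0):
    """Build a PutMetricAlarm request mirroring the console's 75% line.

    The alarm name, cluster identifier, and SNS topic ARN are placeholders.
    """
    return {
        "AlarmName": f"{cluster_id}-cpu-high",
        "Namespace": "AWS/Neptune",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "DBClusterIdentifier", "Value": cluster_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 3,  # i.e., sustained for 15 minutes
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    }


def create_cpu_alarm(**kwargs):
    """Create the alarm (requires AWS credentials)."""
    import boto3  # AWS SDK for Python

    boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm_request(**kwargs))
```

Requiring three consecutive five-minute periods above the threshold avoids alerting on brief CPU spikes, which (as noted above) may be perfectly normal for your workload.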

Best Practices for Tuning Neptune Queries

One of the best ways to improve Neptune performance is to tune your most commonly used and most resource-intensive queries to make them less expensive to run.

For information about how to tune Gremlin queries, see Gremlin Query Hints. For information about how to tune SPARQL queries, see SPARQL Query Hints.

Load Balancing Across Read Replicas

You can load balance requests across read replicas by connecting to instance endpoints explicitly. Use the instance endpoints to direct requests to specific read replicas. You must perform any load balancing on the client side.

The read-only (ro) endpoint does not provide any load balancing.
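A minimal client-side round-robin over instance endpoints can be as simple as cycling through a list. The endpoint strings below are hypothetical; substitute the instance endpoints of your own read replicas.

```python
import itertools

# Hypothetical instance endpoints for three read replicas.
REPLICA_ENDPOINTS = [
    "replica-1.cluster-abc.us-east-1.neptune.amazonaws.com",
    "replica-2.cluster-abc.us-east-1.neptune.amazonaws.com",
    "replica-3.cluster-abc.us-east-1.neptune.amazonaws.com",
]

# itertools.cycle yields the endpoints round-robin, forever.
_rotation = itertools.cycle(REPLICA_ENDPOINTS)


def next_read_endpoint():
    """Return the endpoint to send the next read query to."""
    return next(_rotation)
```

A production client would also need health checking and retry logic, since a simple cycle keeps sending requests to a replica even if it becomes unavailable.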

Loading Faster Using a Temporary Larger Instance

Load performance increases with larger instance sizes. If you're not using a large instance type but want faster loads, you can use a larger instance to load the data and then delete it.

The following procedure is for a new cluster. If you have an existing cluster, you can add a new larger instance and then promote it to a primary DB instance.

To load data using a larger instance size

  1. Create a cluster with a single r4.8xlarge instance. This instance is the primary DB instance.

  2. Create one or more read replicas with your desired instance size.

  3. Load your data using the Neptune loader. The load job runs on the primary DB instance.

  4. After the data is finished loading, delete the primary DB instance.
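The steps above can be sketched with the boto3 `neptune` client. All identifiers here are illustrative, and the replica size in step 2 is an arbitrary example; step 3 (running the bulk loader) is shown only as a comment because it goes through the Neptune loader HTTP endpoint rather than the management API.

```python
def loader_instance_params(cluster_id):
    """Parameters for the temporary r4.8xlarge primary (step 1)."""
    return {
        "DBInstanceIdentifier": f"{cluster_id}-loader",
        "DBInstanceClass": "db.r4.8xlarge",
        "Engine": "neptune",
        "DBClusterIdentifier": cluster_id,
    }


def replica_instance_params(cluster_id, size="db.r4.xlarge"):
    """Parameters for a replica at the size you want to keep (step 2)."""
    return {
        "DBInstanceIdentifier": f"{cluster_id}-replica-1",
        "DBInstanceClass": size,
        "Engine": "neptune",
        "DBClusterIdentifier": cluster_id,
    }


def load_with_temporary_instance(cluster_id="my-load-cluster"):
    """Run the whole procedure (requires AWS credentials)."""
    import boto3  # AWS SDK for Python

    neptune = boto3.client("neptune")
    neptune.create_db_cluster(DBClusterIdentifier=cluster_id, Engine="neptune")
    neptune.create_db_instance(**loader_instance_params(cluster_id))   # step 1
    neptune.create_db_instance(**replica_instance_params(cluster_id))  # step 2
    # Step 3: run the Neptune bulk loader against the cluster endpoint.
    # Step 4: delete the large primary once loading finishes; a replica
    # is promoted and becomes the new primary.
    neptune.delete_db_instance(DBInstanceIdentifier=f"{cluster_id}-loader")
```

In practice you would wait for each instance to reach the `available` status before moving to the next step.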

Retry Upload after Data Prefetch Task Interrupted Error

When you are loading data into Neptune using the bulk loader, a LOAD_FAILED status may occasionally result, with a PARSING_ERROR and Data prefetch task interrupted message reported in response to a request for detailed information, like this:

"errorLogs" : [ { "errorCode" : "PARSING_ERROR", "errorMessage" : "Data prefetch task interrupted: Data prefetch task for 11467 failed", "fileName" : "s3://some-source-bucket/some-source-file", "recordNum" : 0 } ]

If you encounter this error, simply retry the bulk load request.

This error occurs because of a temporary interruption that is typically not caused by your request or your data, and it can usually be resolved by running the bulk load request again.

If you are using the default settings, namely "mode" : "AUTO" and "failOnError" : "TRUE", the loader skips the files that it already loaded successfully and resumes loading the files that it had not yet loaded when the interruption occurred.
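The retry behavior described above can be wrapped in a small helper. This is a sketch: the `start_load` and `get_status` callables stand in for whatever wrapper you use around the Neptune loader HTTP endpoint (for example, calls made with the requests library); they are injected as parameters so the retry logic itself stays self-contained.

```python
import time

RETRYABLE_MESSAGE = "Data prefetch task interrupted"


def is_retryable(load_status):
    """Check a loader status payload for the transient prefetch error."""
    for err in load_status.get("errorLogs", []):
        if (err.get("errorCode") == "PARSING_ERROR"
                and RETRYABLE_MESSAGE in err.get("errorMessage", "")):
            return True
    return False


def load_with_retries(start_load, get_status, max_attempts=3, delay=30):
    """Run a bulk load, retrying only on the prefetch interruption.

    start_load starts a load and returns its load id; get_status takes
    that id and returns the detailed status payload.
    """
    status = {}
    for attempt in range(1, max_attempts + 1):
        load_id = start_load()
        status = get_status(load_id)
        if status.get("status") != "LOAD_FAILED" or not is_retryable(status):
            return status
        if attempt < max_attempts:
            time.sleep(delay)  # brief pause before retrying
    return status
```

Because the loader skips already-loaded files under the default settings, retrying this way does not duplicate data from files that completed before the interruption.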