Silo-model multi-tenancy
Some multi-tenant SaaS environments might require tenants' data to be deployed on fully separated resources because of compliance and regulatory requirements. In some cases, large customers require dedicated clusters to reduce noisy neighbor impact. In those situations, you can apply the silo model.
In the silo model, each tenant's data storage is fully isolated from every other tenant's data. All constructs used to represent a tenant's data are physically unique to that tenant, meaning that each tenant generally has distinct storage, monitoring, and management. Each tenant also has a separate AWS Key Management Service (AWS KMS) key for encryption. In Amazon Neptune, a silo is one cluster per tenant.
Cluster per tenant
You can implement a silo model with Neptune by having one tenant per cluster. The following diagram shows three tenants accessing an application microservice in a virtual private cloud (VPC), with a separate cluster for each tenant.

Each cluster has its own endpoint, providing distinct access points for efficient data interaction and management. By placing each tenant in its own cluster, you create a well-defined boundary between tenants and assure customers that their data is fully isolated from other tenants' data. This isolation is also appealing for SaaS solutions that have strict regulatory and security constraints. Additionally, when each tenant has its own cluster, you don't have to worry about the noisy neighbor problem, where one tenant imposes a load that could adversely affect the experience of other tenants.
While the cluster-per-tenant silo model has advantages, it also introduces management and agility challenges. The distributed nature of this model makes it harder to aggregate and assess tenant activity and the operational health across all tenants. Deployment also becomes more challenging because setting up a new tenant now requires the provisioning of a separate cluster. Upgrading becomes more challenging in environments with a shared client layer when client upgrades and versions are tightly coupled to the database upgrade.
Neptune supports both serverless and provisioned clusters. Assess whether your application workload is better handled by serverless or provisioned instances. In general, if your workload has a constant level of demand, provisioned instances will be more cost effective. Serverless is optimized for demanding, highly variable workloads with heavy database usage for short periods of time followed by long periods of light activity or no activity.
When using a Neptune provisioned cluster per tenant, you must select an instance size that accommodates the maximum load your tenant is expected to generate. This dependence on fixed capacity also has a cascading impact on the scaling efficiency and cost of your SaaS environment. While a goal of SaaS is to size dynamically based on actual tenant load, a Neptune provisioned cluster requires you to over-provision to account for heavier periods of usage and spikes in load. Over-provisioning increases the cost per tenant. Additionally, as tenant usage changes over time, you must scale each tenant's cluster up or down separately.
The Neptune team generally advises against a silo model because of the higher cost incurred by idle resources and the additional operational complexities. However, for highly regulated or sensitive workloads that require this additional isolation, customers might be willing to pay the additional cost.
Implementation guidance for the silo model
To implement a cluster-per-tenant silo-isolation model, create AWS Identity and Access Management (IAM) data-access policies. These policies control access to tenants' Neptune clusters by ensuring that tenants can access only the Neptune cluster that contains their own data. Attach the IAM policy for each tenant to an IAM role. The application microservice then uses the IAM role to generate fine-grained temporary credentials by calling the AssumeRole API of AWS Security Token Service (AWS STS). These credentials, which grant access only to that tenant's Neptune cluster, are used to connect to the tenant's cluster.
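As a minimal sketch of this flow, the microservice might call AssumeRole through the AWS SDK for Python (boto3). The account ID, the tenant-role-<n> role-naming convention, and the session name here are hypothetical, not part of any Neptune requirement:

```python
def tenant_role_arn(n: int) -> str:
    # Hypothetical naming convention: one IAM role per tenant.
    return f"arn:aws:iam::123456789012:role/tenant-role-{n}"

def assume_tenant_role(n: int, region: str = "us-east-1") -> dict:
    """Exchange the microservice's identity for tenant-scoped temporary credentials."""
    import boto3  # imported here so tenant_role_arn stays usable without the SDK
    sts = boto3.client("sts", region_name=region)
    resp = sts.assume_role(
        RoleArn=tenant_role_arn(n),
        RoleSessionName=f"tenant-{n}-session",
    )
    # Contains AccessKeyId, SecretAccessKey, and SessionToken that are
    # authorized only against this tenant's Neptune cluster.
    return resp["Credentials"]
```

The returned temporary credentials are then used to sign (Signature Version 4) the Gremlin, openCypher, or SPARQL requests sent to that tenant's cluster endpoint.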
The following code snippet shows a sample data-based IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "neptune-db:ReadDataViaQuery",
        "neptune-db:WriteDataViaQuery"
      ],
      "Resource": "arn:aws:neptune-db:us-east-1:123456789012:tenant-1-cluster/*",
      "Condition": {
        "ArnEquals": {
          "aws:PrincipalArn": "arn:aws:iam::123456789012:role/tenant-role-1"
        }
      }
    }
  ]
}
This policy grants a sample tenant, tenant-1, read and write query access to its Neptune cluster. The Condition element ensures that only the calling entity (the principal) that has assumed the tenant-1 IAM role (tenant-role-1) is allowed to access tenant-1's Neptune cluster.
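Because each new tenant needs its own scoped policy, you might generate these documents programmatically during tenant onboarding. The following standard-library-only sketch reproduces the structure of the sample policy above for any tenant number; the account ID, Region, and tenant-<n>-cluster/tenant-role-<n> naming conventions are assumptions for illustration:

```python
import json

ACCOUNT_ID = "123456789012"  # hypothetical account
REGION = "us-east-1"

def tenant_policy(n: int) -> dict:
    """Build the tenant-scoped Neptune data-access policy document for tenant n."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "neptune-db:ReadDataViaQuery",
                    "neptune-db:WriteDataViaQuery",
                ],
                # Scope access to this tenant's cluster only.
                "Resource": f"arn:aws:neptune-db:{REGION}:{ACCOUNT_ID}:tenant-{n}-cluster/*",
                # Only the principal that assumed this tenant's role is allowed in.
                "Condition": {
                    "ArnEquals": {
                        "aws:PrincipalArn": f"arn:aws:iam::{ACCOUNT_ID}:role/tenant-role-{n}"
                    }
                },
            }
        ],
    }

print(json.dumps(tenant_policy(1), indent=2))
```

The generated document for tenant 1 matches the sample policy shown earlier; pass the serialized JSON to your provisioning tooling when creating the tenant's IAM policy.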