# Using Aurora PostgreSQL with Data API in AWS AppSync
AWS AppSync provides a data source for executing SQL statements against Amazon Aurora clusters that are enabled with a Data API. You can use AWS AppSync resolvers to run SQL statements against the data API with GraphQL queries, mutations, and subscriptions.
**Note:** This tutorial uses the US-EAST-1 Region.
## Creating clusters
Before adding an Amazon RDS data source to AWS AppSync, first enable a Data API on an Aurora Serverless cluster. You must also configure a secret using AWS Secrets Manager. To create an Aurora Serverless cluster, you can use the AWS CLI:
```shell
aws rds create-db-cluster \
    --db-cluster-identifier appsync-tutorial \
    --engine aurora-postgresql --engine-version 13.11 \
    --engine-mode serverless \
    --master-username USERNAME \
    --master-user-password COMPLEX_PASSWORD
```
This will return an ARN for the cluster. You can check on the status of your cluster with the command:
```shell
aws rds describe-db-clusters \
    --db-cluster-identifier appsync-tutorial \
    --query "DBClusters[0].Status"
```
Create a secret via the AWS Secrets Manager console or the AWS CLI with an input file such as the following, using the `USERNAME` and `COMPLEX_PASSWORD` from the previous step:
```json
{
    "username": "USERNAME",
    "password": "COMPLEX_PASSWORD"
}
```
Pass this as a parameter to the CLI:
```shell
aws secretsmanager create-secret \
    --name appsync-tutorial-rds-secret \
    --secret-string file://creds.json
```
This will return an ARN for the secret. Take note of the ARNs of your Aurora Serverless cluster and secret; you will need them later when creating a data source in the AWS AppSync console.
## Enabling the Data API
Once your cluster status changes to `available`, enable the Data API by following the Amazon RDS documentation. The Data API must be enabled before adding the cluster as an AWS AppSync data source. You can also enable the Data API using the AWS CLI:
```shell
aws rds modify-db-cluster \
    --db-cluster-identifier appsync-tutorial \
    --enable-http-endpoint \
    --apply-immediately
```
## Creating the database and table
After enabling your Data API, validate that it works using the `aws rds-data execute-statement` command in the AWS CLI. This ensures that your Aurora Serverless cluster is configured properly before adding it to your AWS AppSync API. First, create a `testdb` database with the `--sql` parameter:
```shell
aws rds-data execute-statement \
    --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:appsync-tutorial" \
    --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:appsync-tutorial-rds-secret" \
    --sql "create DATABASE \"testdb\""
```
If this runs without errors, add two tables with the `create table` command:
```shell
aws rds-data execute-statement \
    --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:appsync-tutorial" \
    --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:appsync-tutorial-rds-secret" \
    --database "testdb" \
    --sql 'create table public.todos (id serial constraint todos_pk primary key, description text not null, due date not null, "createdAt" timestamp default now());'

aws rds-data execute-statement \
    --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:appsync-tutorial" \
    --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:appsync-tutorial-rds-secret" \
    --database "testdb" \
    --sql 'create table public.tasks (id serial constraint tasks_pk primary key, description varchar, "todoId" integer not null constraint tasks_todos_id_fk references public.todos);'
```
If everything runs without issues, you can now add the cluster as a data source in your API.
## Creating a GraphQL schema
Now that your Aurora Serverless Data API is running with configured tables, we'll create a GraphQL schema. You can do this manually, but AWS AppSync lets you quickly get started by importing table configuration from an existing database using the API creation wizard.
To begin:

1. In the AWS AppSync console, choose **Create API**, then **Start with an Amazon Aurora cluster**.
2. Specify API details like the API name, then select your database to generate the API.
3. Choose your database. If needed, update the Region, then choose your Aurora cluster and the `testdb` database.
4. Choose your secret, then choose **Import**.
5. Once tables have been discovered, update the type names: change `Todos` to `Todo` and `Tasks` to `Task`.
6. Preview the generated schema by choosing **Preview Schema**. Your schema will look something like this:
```graphql
type Todo {
    id: Int!
    description: String!
    due: AWSDate!
    createdAt: String
}

type Task {
    id: Int!
    todoId: Int!
    description: String
}
```
7. For the role, you can either have AWS AppSync create a new role or create one with a policy similar to the one below:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["rds-data:ExecuteStatement"],
            "Resource": [
                "arn:aws:rds:us-east-1:123456789012:cluster:appsync-tutorial",
                "arn:aws:rds:us-east-1:123456789012:cluster:appsync-tutorial:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue"],
            "Resource": [
                "arn:aws:secretsmanager:us-east-1:123456789012:secret:appsync-tutorial-rds-secret",
                "arn:aws:secretsmanager:us-east-1:123456789012:secret:appsync-tutorial-rds-secret:*"
            ]
        }
    ]
}
```
Note that there are two statements in this policy: the first grants the role access to your Aurora cluster, and the second to your AWS Secrets Manager secret.
Choose **Next**, review the configuration details, then choose **Create API**. You now have a fully operational API. You can review the full details of your API on the **Schema** page.
## Resolvers for RDS
The API creation flow automatically created the resolvers needed to interact with our types. If you look at the **Schema** page, you will find the resolvers necessary to:
- Create a `todo` via the `Mutation.createTodo` field.
- Update a `todo` via the `Mutation.updateTodo` field.
- Delete a `todo` via the `Mutation.deleteTodo` field.
- Get a single `todo` via the `Query.getTodo` field.
- List all `todos` via the `Query.listTodos` field.
You will find similar fields and resolvers attached to the `Task` type.
Let's take a closer look at some of the resolvers.
### Mutation.createTodo
From the schema editor in the AWS AppSync console, on the right side, choose `testdb` next to `createTodo(...): Todo`. The resolver code uses the `insert` function from the `rds` module to dynamically create an `insert` statement that adds data to the `todos` table. Because we are working with Postgres, we can leverage the `returning` clause to get the inserted data back.

Let's update the resolver to properly specify the `DATE` type of the `due` field:
```js
import { util } from '@aws-appsync/utils';
import { insert, createPgStatement, toJsonObject, typeHint } from '@aws-appsync/utils/rds';

export function request(ctx) {
    const { input } = ctx.args;
    // if a due date is provided, cast it as `DATE`
    if (input.due) {
        input.due = typeHint.DATE(input.due);
    }
    const insertStatement = insert({
        table: 'todos',
        values: input,
        returning: '*',
    });
    return createPgStatement(insertStatement);
}

export function response(ctx) {
    const { error, result } = ctx;
    if (error) {
        return util.appendError(error.message, error.type, result);
    }
    return toJsonObject(result)[0][0];
}
```
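The double index in `toJsonObject(result)[0][0]` can be surprising. Assuming `toJsonObject` returns one array of row objects per SQL statement sent (an array of result sets), a plain-data sketch of the indexing looks like this; the shapes here are illustrative, not the exact Data API wire format:

```javascript
// Hypothetical shape: one result set per statement, each a list of row objects.
// A single "insert ... returning *" statement yields one result set with one row.
const resultSets = [
    [ // result set for the first (and only) statement
        { id: 1, description: 'Hello World!', due: '2023-12-31' },
    ],
];

// [0] selects the first statement's result set; [0][0] selects its first row.
const inserted = resultSets[0][0];
console.log(inserted.id); // 1
```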
Save the resolver. The type hint marks the `due` property in our `input` object as a `DATE` type, which allows the Postgres engine to properly interpret the value. Next, update your schema to remove `id` from the `CreateTodoInput` input. Because our Postgres database can return the generated ID, we can rely on it for creation and return the result in a single request:
```graphql
input CreateTodoInput {
    due: AWSDate!
    createdAt: String
    description: String!
}
```
Make the change and update your schema. Head to the Queries editor to add an item to the database:
```graphql
mutation CreateTodo {
    createTodo(input: {description: "Hello World!", due: "2023-12-31"}) {
        id
        due
        description
        createdAt
    }
}
```
You get the result:
```json
{
    "data": {
        "createTodo": {
            "id": 1,
            "due": "2023-12-31",
            "description": "Hello World!",
            "createdAt": "2023-11-14 20:47:11.875428"
        }
    }
}
```
### Query.listTodos
From the schema editor in the console, on the right side, choose `testdb` next to `listTodos`. The request handler uses the `select` utility function to build a request dynamically at run time:
```js
import { util } from '@aws-appsync/utils';
import { select, createPgStatement } from '@aws-appsync/utils/rds';

export function request(ctx) {
    const { filter = {}, limit = 100, nextToken } = ctx.args;
    const offset = nextToken ? +util.base64Decode(nextToken) : 0;
    const statement = select({
        table: 'todos',
        columns: '*',
        limit,
        offset,
        where: filter,
    });
    return createPgStatement(statement);
}
```
We want to filter `todos` based on the `due` date, so let's update the resolver to cast `due` values to `DATE`. Update the list of imports and the request handler:
```js
import { util } from '@aws-appsync/utils';
import * as rds from '@aws-appsync/utils/rds';

export function request(ctx) {
    const { filter: where = {}, limit = 100, nextToken } = ctx.args;
    const offset = nextToken ? +util.base64Decode(nextToken) : 0;
    // if `due` is used in a filter, CAST the values to DATE.
    if (where.due) {
        Object.entries(where.due).forEach(([k, v]) => {
            if (k === 'between') {
                where.due[k] = v.map((d) => rds.typeHint.DATE(d));
            } else {
                where.due[k] = rds.typeHint.DATE(v);
            }
        });
    }
    const statement = rds.select({
        table: 'todos',
        columns: '*',
        limit,
        offset,
        where,
    });
    return rds.createPgStatement(statement);
}

export function response(ctx) {
    const {
        args: { limit = 100, nextToken },
        error,
        result,
    } = ctx;
    if (error) {
        return util.appendError(error.message, error.type, result);
    }
    const offset = nextToken ? +util.base64Decode(nextToken) : 0;
    const items = rds.toJsonObject(result)[0];
    const endOfResults = items?.length < limit;
    const token = endOfResults ? null : util.base64Encode(`${offset + limit}`);
    return { items, nextToken: token };
}
```
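The pagination scheme above encodes a plain row offset as a base64 token. A minimal Node.js sketch of the same round trip, using `Buffer` as a stand-in for AppSync's `util.base64Encode`/`util.base64Decode` (an assumption; the resolver runtime provides its own implementations):

```javascript
// Stand-ins for AppSync's util.base64Encode / util.base64Decode.
const base64Encode = (s) => Buffer.from(s, 'utf8').toString('base64');
const base64Decode = (s) => Buffer.from(s, 'base64').toString('utf8');

// First page: no token, so the offset is 0. Suppose the page came back full
// (items.length === limit), so we hand out a token for the next page.
const limit = 100;
let offset = 0;
let nextToken = base64Encode(`${offset + limit}`);

// Next request: the client sends the token back; the resolver decodes the offset.
offset = +base64Decode(nextToken);
console.log(offset); // 100
```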
Let's try out the query. In the Queries editor:
```graphql
query LIST {
    listTodos(limit: 10, filter: {due: {between: ["2021-01-01", "2025-01-02"]}}) {
        items {
            id
            due
            description
        }
    }
}
```
### Mutation.updateTodo
You can also update a `Todo`. From the Queries editor, let's update our first `Todo` item, which has an `id` of `1`:
```graphql
mutation UPDATE {
    updateTodo(input: {id: 1, description: "edits"}) {
        description
        due
        id
    }
}
```
Note that you must specify the `id` of the item you are updating. You can also specify a condition so that only an item meeting specific conditions is updated. For example, we may only want to edit the item if the description starts with `edits`:
```graphql
mutation UPDATE {
    updateTodo(
        input: {id: 1, description: "edits: make a change"},
        condition: {description: {beginsWith: "edits"}}
    ) {
        description
        due
        id
    }
}
```
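Conceptually, a `beginsWith` condition corresponds to a SQL `LIKE 'prefix%'` comparison (how the `rds` module renders it internally is an implementation detail not shown in this tutorial). A sketch of that mapping in plain JavaScript, with a hypothetical `toLike` helper:

```javascript
// Hypothetical helper: turn a beginsWith condition into a LIKE pattern,
// illustrating the concept only -- this is not the rds module's actual code.
function toLike(condition) {
    if (condition.beginsWith !== undefined) {
        return `LIKE '${condition.beginsWith}%'`;
    }
    throw new Error('unsupported condition');
}

console.log(toLike({ beginsWith: 'edits' })); // LIKE 'edits%'
```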
Just like how we handled our `create` and `list` operations, we can update our resolver to cast the `due` field to a `DATE`. Save these changes to `updateTodo`:
```js
import { util } from '@aws-appsync/utils';
import * as rds from '@aws-appsync/utils/rds';

export function request(ctx) {
    const {
        input: { id, ...values },
        condition = {},
    } = ctx.args;
    const where = { ...condition, id: { eq: id } };
    // if `due` is used in a condition, CAST the values to DATE.
    if (condition.due) {
        Object.entries(condition.due).forEach(([k, v]) => {
            if (k === 'between') {
                condition.due[k] = v.map((d) => rds.typeHint.DATE(d));
            } else {
                condition.due[k] = rds.typeHint.DATE(v);
            }
        });
    }
    // if a due date is provided, cast it as `DATE`
    if (values.due) {
        values.due = rds.typeHint.DATE(values.due);
    }
    const updateStatement = rds.update({
        table: 'todos',
        values,
        where,
        returning: '*',
    });
    return rds.createPgStatement(updateStatement);
}

export function response(ctx) {
    const { error, result } = ctx;
    if (error) {
        return util.appendError(error.message, error.type, result);
    }
    return rds.toJsonObject(result)[0][0];
}
```
Now try an update with a condition:
```graphql
mutation UPDATE {
    updateTodo(
        input: {id: 1, description: "edits: make a change", due: "2023-12-12"},
        condition: {description: {beginsWith: "edits"}, due: {ge: "2023-11-08"}}
    ) {
        description
        due
        id
    }
}
```
### Mutation.deleteTodo
You can delete a `Todo` with the `deleteTodo` mutation. This works like the `updateTodo` mutation: you must specify the `id` of the item you want to delete:
```graphql
mutation DELETE {
    deleteTodo(input: {id: 1}) {
        description
        due
        id
    }
}
```
## Writing custom queries
So far, we've used the `rds` module utilities to create our SQL statements. We can also write our own custom static statements to interact with our database. First, update the schema to remove the `id` field from the `CreateTaskInput` input:
```graphql
input CreateTaskInput {
    todoId: Int!
    description: String
}
```
Next, create a couple of tasks. A task has a foreign-key relationship with `Todo`:
```graphql
mutation TASKS {
    a: createTask(input: {todoId: 2, description: "my first sub task"}) { id }
    b: createTask(input: {todoId: 2, description: "another sub task"}) { id }
    c: createTask(input: {todoId: 2, description: "a final sub task"}) { id }
}
```
Create a new field in your `Query` type called `getTodoAndTasks`:
```graphql
getTodoAndTasks(id: Int!): Todo
```
Add a `tasks` field to the `Todo` type:
```graphql
type Todo {
    due: AWSDate!
    id: Int!
    createdAt: String
    description: String!
    tasks: TaskConnection
}
```
Save the schema. From the schema editor in the console, on the right side, choose **Attach Resolver** for `getTodoAndTasks(id: Int!): Todo`. Choose your Amazon RDS data source, then update your resolver with the following code:
```js
import { sql, createPgStatement, toJsonObject } from '@aws-appsync/utils/rds';

export function request(ctx) {
    return createPgStatement(
        sql`SELECT * from todos where id = ${ctx.args.id}`,
        sql`SELECT * from tasks where "todoId" = ${ctx.args.id}`
    );
}

export function response(ctx) {
    const result = toJsonObject(ctx.result);
    const todo = result[0][0];
    if (!todo) {
        return null;
    }
    todo.tasks = { items: result[1] };
    return todo;
}
```
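Because `createPgStatement` was given two statements, `toJsonObject(ctx.result)` yields two result sets: `result[0]` for the todo query and `result[1]` for the tasks query. A plain-data sketch of the stitching performed in the response handler (the shapes are illustrative, not the exact Data API format):

```javascript
// Illustrative result sets: one per SQL statement sent.
const result = [
    [{ id: 2, description: 'Hello World!', due: '2023-12-31' }], // todos query
    [                                                            // tasks query
        { id: 1, todoId: 2, description: 'my first sub task' },
        { id: 2, todoId: 2, description: 'another sub task' },
    ],
];

const todo = result[0][0];
// Nest the tasks under the todo so the shape matches the TaskConnection field.
todo.tasks = { items: result[1] };
console.log(todo.tasks.items.length); // 2
```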
In this code, we use the `sql` tagged template to write a SQL statement that we can safely pass a dynamic value to at run time. `createPgStatement` can take up to two SQL requests at a time. We use that to send one query for our `todo` and another for our `tasks`. You could have done this with a `JOIN` statement or any other method for that matter; the idea is being able to write your own SQL statement to implement your business logic. To use the query, try this in the Queries editor:
```graphql
query TodoAndTasks {
    getTodoAndTasks(id: 2) {
        id
        due
        description
        tasks {
            items {
                id
                description
            }
        }
    }
}
```
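Tagged templates like `sql` work because JavaScript hands the tag function the literal string chunks separately from the interpolated values, which is what lets the runtime bind `${ctx.args.id}` as a parameter instead of splicing it into the SQL text. A generic Node.js sketch of that mechanism (the real `sql` tag is AppSync's; this toy tag only demonstrates the separation, and the `:P0` placeholder syntax is an arbitrary choice):

```javascript
// A toy tag: collect the literal SQL text with placeholders, plus the values.
function toySql(strings, ...values) {
    // strings: the literal chunks around each ${...}; values: the interpolations.
    const text = strings.reduce(
        (acc, s, i) => acc + s + (i < values.length ? `:P${i}` : ''),
        ''
    );
    return { text, values };
}

const id = 2;
const stmt = toySql`SELECT * from todos where id = ${id}`;
console.log(stmt.text);   // SELECT * from todos where id = :P0
console.log(stmt.values); // [ 2 ]
```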
## Deleting your cluster
**Important:** Deleting a cluster is permanent. Review your project thoroughly before carrying out this action.

To delete your cluster:
```shell
aws rds delete-db-cluster \
    --db-cluster-identifier appsync-tutorial \
    --skip-final-snapshot
```