JavaScript resolvers overview
AWS AppSync lets you respond to GraphQL requests by performing operations on your data sources. For each GraphQL field you wish to run a query, mutation, or subscription on, a resolver must be attached. Each resolver is configured with one or more functions that communicate with data sources.
Resolvers are the connectors between GraphQL and a data source. They tell AWS AppSync how to translate an incoming GraphQL request into instructions for your backend data source, and how to translate the response from that data source back into a GraphQL response. With AWS AppSync, you can write your resolver functions using JavaScript with the APPSYNC_JS runtime. For a complete list of features and functionality supported by the APPSYNC_JS runtime, see JavaScript runtime features for resolvers and functions.
Anatomy of a JavaScript pipeline resolver
A pipeline resolver is composed of code that defines a request and response handler and a list of functions. Each function has a request and response handler that it executes against a data source. Because a pipeline resolver delegates execution to a list of functions, it is not itself linked to any data source. Unit resolvers (written in VTL) and functions are the primitives that execute operations against data sources.
Pipeline resolver request handler
The request handler of a pipeline resolver (the before step) allows you to perform some preparation logic before running the defined functions.
Functions list
The list of functions a pipeline resolver will run in sequence. The pipeline resolver request handler evaluation result is made available to the first function as ctx.prev.result. Each function's evaluation result is available to the next function as ctx.prev.result.
Pipeline resolver response handler
The response handler of a pipeline resolver (the after step) allows you to perform some final logic that maps the output of the last function to the expected GraphQL field type. The output of the last function in the functions list is available in the pipeline resolver response handler as ctx.prev.result or ctx.result.
Execution flow
Given a pipeline resolver composed of two functions, the list below represents the execution flow when the resolver is invoked:
- Pipeline resolver request handler (before step)
- Function 1: Function request handler
- Function 1: Data source invocation
- Function 1: Function response handler
- Function 2: Function request handler
- Function 2: Data source invocation
- Function 2: Function response handler
- Pipeline resolver response handler (after step)
Pipeline resolver execution flow is unidirectional and defined statically on the resolver.
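The unidirectional flow above can be sketched as a small plain-JavaScript model. This is for illustration only: the real APPSYNC_JS runtime performs the data source invocations for you, and the handler and data source shapes here are assumptions.

```javascript
// Simplified model of a pipeline resolver run (illustration only; the
// actual AppSync service invokes data sources between handlers for you).
function runPipeline(resolver, functions, ctx) {
  // Before step: the resolver request handler result seeds ctx.prev.result
  ctx.prev = { result: resolver.request(ctx) };
  for (const fn of functions) {
    const request = fn.request(ctx);          // function request handler
    ctx.result = fn.dataSource(request);      // (mock) data source invocation
    ctx.prev = { result: fn.response(ctx) };  // function response handler
  }
  // After step: the resolver response handler shapes the final result
  return resolver.response(ctx);
}

// A two-function pipeline against in-memory stand-ins for data sources
const resolver = {
  request: (ctx) => ({}),
  response: (ctx) => ctx.prev.result,
};
const functions = [
  {
    request: (ctx) => ({ operation: 'GetItem', id: ctx.args.id }),
    dataSource: (req) => ({ id: req.id, title: 'hello' }),
    response: (ctx) => ctx.result,
  },
  {
    request: (ctx) => ({ operation: 'Enrich', item: ctx.prev.result }),
    dataSource: (req) => ({ ...req.item, enriched: true }),
    response: (ctx) => ctx.result,
  },
];
const out = runPipeline(resolver, functions, { args: { id: '1' } });
```

Note how each function only ever sees the previous step's output through ctx.prev.result, which is why the flow is strictly one-directional.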
Useful APPSYNC_JS runtime built-in utilities
The following utilities can help you when you’re working with pipeline resolvers.
ctx.stash
The stash is an object that is made available inside each resolver and function request and response handler. The same stash instance lives through a single resolver run. This means that you can use the stash to pass arbitrary data across request and response handlers and across functions in a pipeline resolver. You can interact with the stash like a regular JavaScript object.
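For example, a value stored in the stash by the resolver's request handler is visible to every later handler in the same run. The sketch below calls the handlers directly with a mocked ctx; in a real resolver these would be the exported `request`/`response` handlers, and the timestamp would come from `util.time.nowISO8601()`.

```javascript
// Sketch: passing data through ctx.stash (mocked ctx for illustration).
function resolverRequest(ctx) {
  // Stash a value during the before step (hypothetical key name).
  ctx.stash.requestedAt = '2024-01-01T00:00:00Z'; // e.g. util.time.nowISO8601()
  return {};
}

function functionResponse(ctx) {
  // Any later handler in the same run sees the same stash instance.
  return { ...ctx.result, requestedAt: ctx.stash.requestedAt };
}

const ctx = { stash: {}, result: { id: '1', title: 'hello' } };
resolverRequest(ctx);
const item = functionResponse(ctx);
```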
ctx.prev.result
ctx.prev.result represents the result of the previous operation that was executed in the pipeline. If the previous operation was the pipeline resolver request handler, then ctx.prev.result represents the output of evaluating the handler and is made available to the first function in the chain. If the previous operation was the first function, then ctx.prev.result represents the output of the first function and is made available to the second function in the pipeline. If the previous operation was the last function, then ctx.prev.result represents the output of the last function and is made available to the pipeline resolver response handler.
util.error
The util.error utility is useful to throw a field error. Using util.error inside a function request or response handler throws a field error immediately, which prevents subsequent functions from being executed. For more details and other util.error signatures, visit JavaScript runtime features for resolvers and functions.
util.appendError
util.appendError is similar to util.error(), with the major distinction that it doesn't interrupt the evaluation of the handler. Instead, it signals that there was an error with the field, but allows the handler to be evaluated and consequently return data. Using util.appendError inside a function will not disrupt the execution flow of the pipeline. For more details and other util.appendError signatures, visit JavaScript runtime features for resolvers and functions.
Writing resolvers using the APPSYNC_JS runtime
AWS AppSync pipeline resolvers contain up to 10 functions that are executed in sequence to resolve a query, mutation, or subscription. Each function is associated with a data source and provides code that tells the AWS AppSync service how to read or write data from or to that data source. This code defines a request handler and a response handler. The request handler takes a context object as an argument and returns a request payload in the form of a JSON object used to call your data source. The response handler receives a payload back from the data source with the result of the executed request. The response handler translates this payload into an appropriate format (optionally) and returns it. In the example below, a function retrieves an item from an Amazon DynamoDB data source:
```javascript
import { util } from '@aws-appsync/utils'

/**
 * Request a single item with `id` from the attached DynamoDB table data source
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function request(ctx) {
  const { args: { id } } = ctx
  return { operation: 'GetItem', key: util.dynamodb.toMapValues({ id }) }
}

/**
 * Returns the result directly
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function response(ctx) {
  return ctx.result
}
```
A pipeline resolver also has a request and a response handler surrounding the run of the functions in the pipeline: its request handler is run before the first function's request, and its response handler is run after the last function's response. The resolver request handler can set up data to be used by functions in the pipeline. The resolver response handler is responsible for returning data that maps to the GraphQL field output type. In the example below, a resolver request handler defines allowedGroups; the data returned should belong to one of these groups. This value can be used by the resolver's functions to request data. The resolver's response handler conducts a final check and filters the result to make sure that only items that belong to the allowed groups are returned.
```javascript
import { util } from '@aws-appsync/utils';

/**
 * Called before the request function of the first AppSync function in the pipeline.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function request(ctx) {
  ctx.stash.allowedGroups = ['admin'];
  ctx.stash.startedAt = util.time.nowISO8601();
  return {};
}

/**
 * Called after the response function of the last AppSync function in the pipeline.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function response(ctx) {
  const result = [];
  for (const item of ctx.prev.result) {
    if (ctx.stash.allowedGroups.indexOf(item.group) > -1) result.push(item);
  }
  return result;
}
```
AWS AppSync functions enable you to write common logic that you can reuse across multiple resolvers in your schema. For example, you can have one AWS AppSync function called QUERY_ITEMS that is responsible for querying items from an Amazon DynamoDB data source. For resolvers that you'd like to query items with, simply add the function to the resolver's pipeline and provide the query index to be used. The logic doesn't have to be re-implemented.
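A hypothetical QUERY_ITEMS function along those lines might look like the sketch below. The stash keys (`index`, `keyName`) are assumptions for illustration: each resolver's request handler would stash the index to use. In a real function you would build `expressionValues` with `util.dynamodb.toMapValues` from '@aws-appsync/utils'; the DynamoDB-typed map is written out by hand here so the sketch is self-contained.

```javascript
/**
 * Sketch of a hypothetical reusable QUERY_ITEMS function. The resolver's
 * request handler is assumed to stash the index and key attribute name.
 */
function request(ctx) {
  return {
    operation: 'Query',
    index: ctx.stash.index,
    query: {
      expression: '#key = :key',
      expressionNames: { '#key': ctx.stash.keyName },
      // real code: util.dynamodb.toMapValues({ ':key': ctx.args.id })
      expressionValues: { ':key': { S: ctx.args.id } },
    },
  };
}

function response(ctx) {
  return ctx.result.items;
}

// The same function reused with a different index per resolver:
const req = request({ stash: { index: 'byAuthor', keyName: 'authorId' }, args: { id: 'a1' } });
```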

Writing code
Suppose you wanted to attach a pipeline resolver on a field named getPost(id:ID!) that returns a Post type from an Amazon DynamoDB data source with the following GraphQL query:

```graphql
getPost(id:1){
  id
  title
  content
}
```
First, attach a simple resolver to Query.getPost with the code below. This is an example of simple resolver code. There is no logic defined in the request handler, and the response handler simply returns the result of the last function.
```javascript
/**
 * Invoked **before** the request handler of the first AppSync function in the pipeline.
 * The resolver `request` handler allows you to perform some preparation logic
 * before executing the defined functions in your pipeline.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function request(ctx) {
  return {}
}

/**
 * Invoked **after** the response handler of the last AppSync function in the pipeline.
 * The resolver `response` handler allows you to perform some final evaluation logic
 * from the output of the last function to the expected GraphQL field type.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function response(ctx) {
  return ctx.prev.result
}
```
Next, define a function named GET_ITEM that retrieves a Post item from your data source:
```javascript
import { util } from '@aws-appsync/utils'

/**
 * Request a single item from the attached DynamoDB table data source
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function request(ctx) {
  const { id } = ctx.args;
  return { operation: 'GetItem', key: util.dynamodb.toMapValues({ id }) };
}

/**
 * Returns the result
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function response(ctx) {
  const { error, result } = ctx;
  if (error) {
    return util.appendError(error.message, error.type, result);
  }
  return ctx.result;
}
```
If there is an error during the request, the function's response handler appends an error that will be returned to the calling client in the GraphQL response. Add the GET_ITEM function to your resolver's functions list. When you execute the query, the GET_ITEM function's request handler creates a DynamoDB GetItem request using the id as the key.
```json
{
  "operation" : "GetItem",
  "key" : {
    "id" : { "S" : "1" }
  }
}
```
AWS AppSync uses the request to fetch the data from Amazon DynamoDB. Once the data is returned, it is handled by the GET_ITEM function's response handler, which checks for errors and then returns the result.
```json
{
  "result" : {
    "id": 1,
    "title": "hello world",
    "content": "<long story>"
  }
}
```
Finally, the resolver’s response handler returns the result directly.
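Putting it together, the client would then receive a GraphQL response shaped like the following (a sketch based on the result above and the standard GraphQL response envelope):

```json
{
  "data": {
    "getPost": {
      "id": 1,
      "title": "hello world",
      "content": "<long story>"
    }
  }
}
```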
Working with errors
If an error occurs in your function during a request, the error will be made available in your function response handler in ctx.error. You can append the error to your GraphQL response using the util.appendError utility. You can make the error available to other functions in the pipeline by using the stash. See the example below:
```javascript
/**
 * Returns the result
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function response(ctx) {
  const { error, result } = ctx;
  if (error) {
    if (!ctx.stash.errors) ctx.stash.errors = []
    ctx.stash.errors.push(ctx.error)
    return util.appendError(error.message, error.type, result);
  }
  return ctx.result;
}
```
Utilities
AWS AppSync provides two libraries that aid in the development of resolvers with the APPSYNC_JS runtime:
- @aws-appsync/eslint-plugin - Catches and fixes problems quickly during development.
- @aws-appsync/utils - Provides type validation and autocompletion in code editors.
Configuring the eslint plugin
@aws-appsync/eslint-plugin is an ESLint plugin that catches invalid syntax in your code when leveraging the APPSYNC_JS runtime. The plugin allows you to quickly get feedback about your code during development without having to push your changes to the cloud.
@aws-appsync/eslint-plugin provides two rule sets that you can use during development.
"plugin:@aws-appsync/base" configures a base set of rules that you can leverage in your project:
Rule | Description |
---|---|
no-async | Async processes and promises are not supported. |
no-await | Async processes and promises are not supported. |
no-classes | Classes are not supported. |
no-for | `for` is not supported (except `for-in` and `for-of`, which are supported). |
no-continue | `continue` is not supported. |
no-generators | Generators are not supported. |
no-yield | `yield` is not supported. |
no-labels | Labels are not supported. |
no-this | The `this` keyword is not supported. |
no-try | Try/catch structures are not supported. |
no-while | While loops are not supported. |
no-disallowed-unary-operators | The `++`, `--`, and `~` unary operators are not allowed. |
no-disallowed-binary-operators | Certain binary operators are not allowed. |
no-promise | Async processes and promises are not supported. |
"plugin:@aws-appsync/recommended" provides some additional rules but also requires you to add TypeScript configurations to your project.
Rule | Description |
---|---|
no-recursion | Recursive function calls are not allowed. |
no-disallowed-methods | Some methods are not allowed. See the reference for a full set of supported built-in functions. |
no-function-passing | Passing functions as function arguments to functions is not allowed. |
no-function-reassign | Functions cannot be reassigned. |
no-function-return | Functions cannot be the return value of functions. |
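As an illustration of what these rule sets steer you toward, constructs the plugin flags (such as `while` loops or try/catch) are typically rewritten with supported ones. This is a sketch, not an exhaustive mapping of the rules:

```javascript
// Instead of a `while` or classic `for (let i = 0; ...)` loop (both
// flagged), iterate with `for-of`, which APPSYNC_JS supports.
function sumPrices(items) {
  let total = 0;
  for (const item of items) {
    total += item.price;
  }
  return total;
}

// Instead of try/catch (flagged by no-try), check ctx.error in a
// response handler and report it via util.appendError / util.error.
function response(ctx) {
  if (ctx.error) {
    // in a real handler: return util.appendError(ctx.error.message, ctx.error.type)
    return null;
  }
  return ctx.result;
}

const total = sumPrices([{ price: 2 }, { price: 3 }]);
```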
To add the plugin to your project, follow the installation and usage steps at Getting Started with ESLint:

```shell
$ npm install @aws-appsync/eslint-plugin
```
In your .eslintrc.{js,yml,json} file, add "plugin:@aws-appsync/base" or "plugin:@aws-appsync/recommended" to the extends property. The snippet below is a basic sample .eslintrc configuration for JavaScript:
```json
{ "extends": ["plugin:@aws-appsync/base"] }
```
To use the "plugin:@aws-appsync/recommended" rule set, install the required dependency:

```shell
$ npm install -D @typescript-eslint/parser
```
Then, create an .eslintrc.js file:
```javascript
module.exports = {
  "env": {
    "es2021": true,
    "node": true
  },
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended",
    "plugin:@aws-appsync/recommended"
  ],
  "overrides": [],
  "parser": "@typescript-eslint/parser",
  "parserOptions": {
    "ecmaVersion": "latest",
    "sourceType": "module",
    "project": "./tsconfig.json" // your project tsconfig file
  },
  "plugins": [
    "@typescript-eslint"
  ],
  "rules": {}
}
```
Testing
You can use the EvaluateCode API command to remotely test your resolver and function handlers with mocked data before ever saving your code to a resolver or function. To get started with the command, make sure you have added the appsync:evaluateCode permission to your policy. For example:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "appsync:evaluateCode",
      "Resource": "arn:aws:appsync:<region>:<account>:*"
    }
  ]
}
```
You can leverage the command by using the AWS CLI:
```shell
aws appsync evaluate-code \
  --code file://code.js \
  --function request \
  --context file://context.json \
  --runtime name=APPSYNC_JS,runtimeVersion=1.0.0
```
The response contains an evaluationResult containing the payload returned by your handler. It also contains a logs object that holds the list of logs that were generated by your handler during the evaluation. This makes it easy to debug your code execution and see information about your evaluation to help troubleshoot. For example:
```json
{
  "evaluationResult": "{\"operation\":\"PutItem\",\"key\":{\"id\":{\"S\":\"record-id\"}},\"attributeValues\":{\"owner\":{\"S\":\"John doe\"},\"expectedVersion\":{\"N\":2},\"authorId\":{\"S\":\"Sammy Davis\"}}}",
  "logs": [
    "INFO - code.js:5:3: \"current id\" \"record-id\"",
    "INFO - code.js:9:3: \"request evaluated\""
  ]
}
```
The evaluation result can be parsed as JSON, which gives:
```json
{
  "operation": "PutItem",
  "key": {
    "id": { "S": "record-id" }
  },
  "attributeValues": {
    "owner": { "S": "John doe" },
    "expectedVersion": { "N": 2 },
    "authorId": { "S": "Sammy Davis" }
  }
}
```
Using the SDK, you can easily incorporate tests from your test suite to validate your template's behavior. The example here uses the Jest testing framework and JSON.parse to retrieve JSON from the string response:
```javascript
const AWS = require('aws-sdk')
const fs = require('fs')

const client = new AWS.AppSync({ region: 'us-east-2' })
const runtime = { name: 'APPSYNC_JS', runtimeVersion: '1.0.0' }

test('request correctly calls DynamoDB', async () => {
  const code = fs.readFileSync('./code.js', 'utf8')
  const context = fs.readFileSync('./context.json', 'utf8')
  const contextJSON = JSON.parse(context)

  const response = await client.evaluateCode({ code, context, runtime, function: 'request' }).promise()

  const result = JSON.parse(response.evaluationResult)
  expect(result.key.id.S).toBeDefined()
  expect(result.attributeValues.firstname.S).toEqual(contextJSON.arguments.firstname)
})
```
This yields the following result:
```
> jest

PASS ./index.test.js
  ✓ request correctly calls DynamoDB (543 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        1.511 s, estimated 2 s
Ran all test suites.
```
Migrating from VTL to JavaScript
AWS AppSync allows you to write your business logic for your resolvers and functions using VTL or JavaScript. With both languages, you write logic that instructs the AWS AppSync service on how to interact with your data sources. With VTL, you write mapping templates that must evaluate to a valid JSON-encoded string. With JavaScript, you write request and response handlers that return objects. You don't return a JSON-encoded string.
For example, take the following VTL mapping template to get an Amazon DynamoDB item:
```json
{
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.id),
  }
}
```
The utility $util.dynamodb.toDynamoDBJson returns a JSON-encoded string. If $ctx.args.id is set to <id>, the template evaluates to a valid JSON-encoded string:
```json
{
  "operation": "GetItem",
  "key": {
    "id": {"S": "<id>"},
  }
}
```
When working with JavaScript, you do not need to print out raw JSON-encoded strings within your code, and using a utility like toDynamoDBJson is not needed. An equivalent example of the mapping template above is:
```javascript
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  return {
    operation: 'GetItem',
    key: { id: util.dynamodb.toDynamoDB(ctx.args.id) }
  };
}
```
An alternative is to use util.dynamodb.toMapValues, which is the recommended approach to handle an object of values:
```javascript
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ id: ctx.args.id }),
  };
}
```
This evaluates to:
```json
{
  "operation": "GetItem",
  "key": {
    "id": { "S": "<id>" }
  }
}
```
As another example, take the following mapping template to put an item in an Amazon DynamoDB data source:
```json
{
  "operation" : "PutItem",
  "key" : {
    "id": $util.dynamodb.toDynamoDBJson($util.autoId()),
  },
  "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args)
}
```
When evaluated, this mapping template string must produce a valid JSON-encoded string. When using JavaScript, you return the request object directly:
```javascript
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const { id = util.autoId(), ...values } = ctx.args;
  return {
    operation: 'PutItem',
    key: util.dynamodb.toMapValues({ id }),
    attributeValues: util.dynamodb.toMapValues(values),
  };
}
```
which evaluates to:
```json
{
  "operation": "PutItem",
  "key": {
    "id": { "S": "2bff3f05-ff8c-4ed8-92b4-767e29fc4e63" }
  },
  "attributeValues": {
    "firstname": { "S": "Shaggy" },
    "age": { "N": 4 }
  }
}
```
Choosing between direct data source access and proxying via a Lambda data source
With AWS AppSync and the APPSYNC_JS runtime, you can write your own code that implements your custom business logic by using AWS AppSync functions to access your data sources. This makes it easy for you to directly interact with data sources like Amazon DynamoDB, Aurora Serverless, OpenSearch Service, HTTP APIs, and other AWS services without having to deploy additional computational services or infrastructure.

AWS AppSync also makes it easy to interact with an AWS Lambda function by configuring a Lambda data source. Lambda data sources allow you to run complex business logic using AWS Lambda's full set of capabilities to resolve a GraphQL request. In most cases, an AWS AppSync function directly connected to its target data source will provide all of the functionality you need. In situations where you need to implement complex business logic that is not supported by the APPSYNC_JS runtime, you can use a Lambda data source as a proxy to interact with your target data source.
 | Direct data source integration | Lambda data source as a proxy |
---|---|---|
Use case | AWS AppSync functions interact directly with API data sources. | AWS AppSync functions call Lambdas that interact with API data sources. |
Runtime | APPSYNC_JS (JavaScript) | Any supported Lambda runtime |
Maximum size of code | 32,000 characters per AWS AppSync function | 50 MB (zipped, for direct upload) per Lambda |
External modules | Limited - APPSYNC_JS supported features only | Yes |
Call any AWS service | Yes - Using AWS AppSync HTTP datasource | Yes - Using AWS SDK |
Access to the request header | Yes | Yes |
Network access | No | Yes |
File system access | No | Yes |
Logging and metrics | Yes | Yes |
Build and test entirely within AppSync | Yes | No |
Cold start | No | No - With provisioned concurrency |
Auto-scaling | Yes - transparently by AWS AppSync | Yes - As configured in Lambda |
Pricing | No additional charge | Charged for Lambda usage |
AWS AppSync functions that integrate directly with their target data source are ideal for use cases like the following:
- Interacting with Amazon DynamoDB, Aurora Serverless, and OpenSearch Service
- Interacting with HTTP APIs and passing incoming headers
- Interacting with AWS services using HTTP data sources (with AWS AppSync automatically signing requests with the provided data source role)
- Implementing access control before accessing data sources
- Implementing the filtering of retrieved data prior to fulfilling a request
- Implementing simple orchestration with sequential execution of AWS AppSync functions in a resolver pipeline
- Controlling caching and subscription connections in queries and mutations
AWS AppSync functions that use a Lambda data source as a proxy are ideal for use cases like the following:
- Using a language other than JavaScript or Velocity Template Language (VTL)
- Adjusting and controlling CPU or memory to optimize performance
- Importing third-party libraries or requiring unsupported features in APPSYNC_JS
- Making multiple network requests and/or getting file system access to fulfill a query
- Batching requests using batching configuration