
JavaScript resolvers overview

AWS AppSync lets you respond to GraphQL requests by performing operations on your data sources. A resolver must be attached to each GraphQL field on which you want to run a query, mutation, or subscription.

Resolvers are the connectors between GraphQL and a data source. They tell AWS AppSync how to translate an incoming GraphQL request into instructions for your backend data source and how to translate the response from that data source back into a GraphQL response. With AWS AppSync, you can write your resolvers using JavaScript and run them in the AWS AppSync (APPSYNC_JS) environment.

AWS AppSync allows you to write unit resolvers or pipeline resolvers composed of multiple AWS AppSync functions in a pipeline.

Supported runtime features

The AWS AppSync JavaScript runtime provides a subset of JavaScript libraries, utilities, and features. For a complete list of features and functionality supported by the APPSYNC_JS runtime, see JavaScript runtime features for resolvers and functions.

Unit resolvers

A unit resolver is composed of code that defines a request handler and a response handler, which are executed against a data source. The request handler takes a context object as an argument and returns the request payload used to call your data source. The response handler receives a payload back from the data source with the result of the executed request. The response handler transforms the payload into a GraphQL response to resolve the GraphQL field. In the example below, a resolver retrieves an item from an Amazon DynamoDB data source:

import * as ddb from '@aws-appsync/utils/dynamodb'

export function request(ctx) {
  return ddb.get({ key: { id: ctx.args.id } });
}

export const response = (ctx) => ctx.result;

Anatomy of a JavaScript pipeline resolver

A pipeline resolver is composed of code that defines a request handler and a response handler, plus a list of functions. Each function has a request and response handler that it executes against a data source. Because a pipeline resolver delegates execution to a list of functions, it is not linked directly to any data source. Unit resolvers and functions are the primitives that execute operations against data sources.

Pipeline resolver request handler

The request handler of a pipeline resolver (the before step) allows you to perform some preparation logic before running the defined functions.

Functions list

The functions list holds the functions that a pipeline resolver runs in sequence. The evaluation result of the pipeline resolver request handler is made available to the first function as ctx.prev.result. Each function's evaluation result is then available to the next function as ctx.prev.result.
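For example, a function in the middle of a pipeline can read the previous step's output from ctx.prev.result and forward it. The minimal sketch below assumes a NONE data source, which echoes the request payload back as the result; the itemId field is illustrative:

export function request(ctx) {
  // ctx.prev.result holds the evaluation result of the previous step
  // (for the first function, that is the resolver request handler's result).
  const { itemId } = ctx.prev.result;
  // With a NONE data source, this payload is echoed back as the result.
  return { payload: { itemId, seen: true } };
}

export function response(ctx) {
  // Whatever is returned here becomes ctx.prev.result for the next function.
  return ctx.result;
}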

Pipeline resolver response handler

The response handler of a pipeline resolver (the after step) lets you perform final logic that maps the output of the last function to the expected GraphQL field type. The output of the last function in the functions list is available in the pipeline resolver response handler as ctx.prev.result or ctx.result.

Execution flow

Given a pipeline resolver composed of two functions, the list below represents the execution flow when the resolver is invoked:

  1. Pipeline resolver request handler

  2. Function 1: Function request handler

  3. Function 1: Data source invocation

  4. Function 1: Function response handler

  5. Function 2: Function request handler

  6. Function 2: Data source invocation

  7. Function 2: Function response handler

  8. Pipeline resolver response handler

Useful APPSYNC_JS runtime built-in utilities

The following utilities can help you when you’re working with pipeline resolvers.

ctx.stash

The stash is an object that is made available inside each resolver and function request and response handler. The same stash instance lives through a single resolver run. This means that you can use the stash to pass arbitrary data across request and response handlers, and across functions in a pipeline resolver. You can read from and write to the stash like a regular JavaScript object.
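For example, a resolver request handler can stash a value that any later handler in the same run reads back (a minimal sketch; the header name is illustrative):

// Pipeline resolver request handler: write to the stash.
export function request(ctx) {
  ctx.stash.traceId = ctx.request.headers['x-trace-id'];
  return {};
}

// Any later handler in the same run can read the value back.
export function response(ctx) {
  console.log('trace id:', ctx.stash.traceId);
  return ctx.prev.result;
}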

ctx.prev.result

The ctx.prev.result represents the result of the previous operation that was executed in the pipeline. If the previous operation was the pipeline resolver request handler, then ctx.prev.result is made available to the first function in the chain. If the previous operation was the first function, then ctx.prev.result represents the output of the first function and is made available to the second function in the pipeline. If the previous operation was the last function, then ctx.prev.result represents the output of the last function and is made available to the pipeline resolver response handler.

util.error

The util.error utility is useful for throwing a field error. Using util.error inside a function request or response handler throws a field error immediately, which prevents subsequent functions from being executed. For more details and other util.error signatures, visit JavaScript runtime features for resolvers and functions.
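For example, a function response handler can surface a data source error as a field error, stopping the pipeline (a minimal sketch):

import { util } from '@aws-appsync/utils';

export function response(ctx) {
  if (ctx.error) {
    // Throws a field error immediately; no subsequent functions run.
    util.error(ctx.error.message, ctx.error.type);
  }
  return ctx.result;
}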

util.appendError

util.appendError is similar to util.error(), with the major distinction that it doesn't interrupt the evaluation of the handler. Instead, it signals that there was an error with the field, but allows the handler to finish evaluating and consequently return data. Using util.appendError inside a function will not disrupt the execution flow of the pipeline. For more details and other util.appendError signatures, visit JavaScript runtime features for resolvers and functions.
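For example, a response handler can flag a partial result while still returning data (a minimal sketch; the result fields and error type are illustrative):

import { util } from '@aws-appsync/utils';

export function response(ctx) {
  if (ctx.result.nextToken) {
    // Signals an error on the field without interrupting evaluation;
    // the items below are still returned to the client.
    util.appendError('More results are available', 'PartialResult');
  }
  return ctx.result.items;
}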

runtime.earlyReturn

The runtime.earlyReturn function allows you to return prematurely from any request handler. Using runtime.earlyReturn inside a resolver request handler returns from the resolver. Calling it from an AWS AppSync function request handler returns from the function, and the run continues with either the next function in the pipeline or the resolver response handler.
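For example, a function request handler can skip its data source call when a value is already available in the stash (a minimal sketch; the cachedPost stash key is illustrative):

import { runtime, util } from '@aws-appsync/utils';

export function request(ctx) {
  if (ctx.stash.cachedPost) {
    // Skip this function's data source call; the pipeline continues
    // with this value as the function result.
    runtime.earlyReturn(ctx.stash.cachedPost);
  }
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ id: ctx.args.id }),
  };
}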

Writing pipeline resolvers

A pipeline resolver also has a request and a response handler surrounding the run of the functions in the pipeline: its request handler runs before the first function's request, and its response handler runs after the last function's response. The resolver request handler can set up data to be used by functions in the pipeline. The resolver response handler is responsible for returning data that maps to the GraphQL field output type. In the example below, the resolver request handler defines allowedGroups; the data returned must belong to one of these groups. This value can be used by the resolver's functions to request data. The resolver's response handler conducts a final check and filters the result to make sure that only items that belong to the allowed groups are returned.

import { util } from '@aws-appsync/utils';

/**
 * Called before the request function of the first AppSync function in the pipeline.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function request(ctx) {
  ctx.stash.allowedGroups = ['admin'];
  ctx.stash.startedAt = util.time.nowISO8601();
  return {};
}

/**
 * Called after the response function of the last AppSync function in the pipeline.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function response(ctx) {
  const result = [];
  for (const item of ctx.prev.result) {
    if (ctx.stash.allowedGroups.indexOf(item.group) > -1) result.push(item);
  }
  return result;
}

Writing AWS AppSync functions

AWS AppSync functions enable you to write common logic that you can reuse across multiple resolvers in your schema. For example, you can have one AWS AppSync function called QUERY_ITEMS that is responsible for querying items from an Amazon DynamoDB data source. For any resolver that needs to query items, add the function to the resolver's pipeline and provide the query index to use; the logic doesn't have to be reimplemented. A sketch of such a function follows.
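A hypothetical QUERY_ITEMS function could look like the sketch below: each resolver's request handler stashes the index and key condition to use, and the shared function builds the query. The stash keys, index name, and condition shape are illustrative:

import * as ddb from '@aws-appsync/utils/dynamodb';

// Shared function: queries the attached DynamoDB table using whatever
// index and key condition the calling resolver placed in the stash.
export function request(ctx) {
  return ddb.query({
    index: ctx.stash.queryIndex, // e.g., 'author-index'
    query: ctx.stash.queryCondition, // e.g., { authorId: { eq: ctx.args.authorId } }
  });
}

export function response(ctx) {
  return ctx.result.items;
}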

Writing code

Suppose you want to attach a pipeline resolver to a field named getPost(id: ID!) that returns a Post type from an Amazon DynamoDB data source, queried with the following GraphQL query:

getPost(id: 1) {
  id
  title
  content
}

First, attach a simple resolver to Query.getPost with the code below. There is no logic in the request handler; the response handler simply returns the result of the last function.

/**
 * Invoked **before** the request handler of the first AppSync function in the pipeline.
 * The resolver `request` handler allows you to perform some preparation logic
 * before executing the defined functions in your pipeline.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function request(ctx) {
  return {}
}

/**
 * Invoked **after** the response handler of the last AppSync function in the pipeline.
 * The resolver `response` handler allows you to perform some final evaluation logic
 * from the output of the last function to the expected GraphQL field type.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function response(ctx) {
  return ctx.prev.result
}

Next, define a function named GET_ITEM that retrieves a Post item from your data source:

import { util } from '@aws-appsync/utils'
import * as ddb from '@aws-appsync/utils/dynamodb'

/**
 * Request a single item from the attached DynamoDB table data source.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function request(ctx) {
  const { id } = ctx.args
  return ddb.get({ key: { id } })
}

/**
 * Returns the result.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function response(ctx) {
  const { error, result } = ctx
  if (error) {
    return util.appendError(error.message, error.type, result)
  }
  return ctx.result
}

If there is an error during the request, the function's response handler appends an error that will be returned to the calling client in the GraphQL response. Add the GET_ITEM function to your resolver's functions list. When you execute the query, the GET_ITEM function's request handler uses the utilities provided by AWS AppSync's DynamoDB module to create a DynamoDB GetItem request using the id as the key. ddb.get({ key: { id } }) generates the appropriate GetItem operation:

{ "operation" : "GetItem", "key" : { "id" : { "S" : "1" } } }

AWS AppSync uses the request to fetch the data from Amazon DynamoDB. Once the data is returned, it is handled by the GET_ITEM function’s response handler, which checks for errors and then returns the result.

{ "result" : { "id": 1, "title": "hello world", "content": "<long story>" } }

Finally, the resolver’s response handler returns the result directly.

Working with errors

If an error occurs in your function during a request, the error will be made available in your function response handler in ctx.error. You can append the error to your GraphQL response using the util.appendError utility. You can make the error available to other functions in the pipeline by using the stash. See the example below:

import { util } from '@aws-appsync/utils';

/**
 * Returns the result.
 * @param ctx the context object holds contextual information about the function invocation.
 */
export function response(ctx) {
  const { error, result } = ctx;
  if (error) {
    if (!ctx.stash.errors) ctx.stash.errors = [];
    ctx.stash.errors.push(ctx.error);
    return util.appendError(error.message, error.type, result);
  }
  return ctx.result;
}

Utilities

AWS AppSync provides two libraries that aid in the development of resolvers with the APPSYNC_JS runtime:

  • @aws-appsync/eslint-plugin - Catches and fixes problems quickly during development.

  • @aws-appsync/utils - Provides type validation and autocompletion in code editors.

Configuring the eslint plugin

ESLint is a tool that statically analyzes your code to quickly find problems. You can run ESLint as part of your continuous integration pipeline. @aws-appsync/eslint-plugin is an ESLint plugin that catches invalid syntax in your code when leveraging the APPSYNC_JS runtime. The plugin allows you to quickly get feedback about your code during development without having to push your changes to the cloud.

@aws-appsync/eslint-plugin provides two rule sets that you can use during development.

"plugin:@aws-appsync/base" configures a base set of rules that you can leverage in your project:

Rule | Description
no-async | Async processes and promises are not supported.
no-await | Async processes and promises are not supported.
no-classes | Classes are not supported.
no-for | for is not supported (except for for-in and for-of, which are supported).
no-continue | continue is not supported.
no-generators | Generators are not supported.
no-yield | yield is not supported.
no-labels | Labels are not supported.
no-this | The this keyword is not supported.
no-try | Try/catch structures are not supported.
no-while | While loops are not supported.
no-disallowed-unary-operators | The ++, --, and ~ unary operators are not allowed.
no-disallowed-binary-operators | The instanceof operator is not allowed.
no-promise | Async processes and promises are not supported.

"plugin:@aws-appsync/recommended" provides some additional rules but also requires you to add TypeScript configurations to your project.

Rule | Description
no-recursion | Recursive function calls are not allowed.
no-disallowed-methods | Some methods are not allowed. See the reference for the full set of supported built-in functions.
no-function-passing | Passing functions as arguments to other functions is not allowed.
no-function-reassign | Functions cannot be reassigned.
no-function-return | Functions cannot be the return value of other functions.

To add the plugin to your project, follow the installation and usage steps at Getting Started with ESLint. Then, install the plugin using your package manager (e.g., npm, yarn, or pnpm):

$ npm install @aws-appsync/eslint-plugin

In your .eslintrc.{js,yml,json} file, add "plugin:@aws-appsync/base" or "plugin:@aws-appsync/recommended" to the extends property. The snippet below is a basic sample .eslintrc configuration for JavaScript:

{ "extends": ["plugin:@aws-appsync/base"] }

To use the "plugin:@aws-appsync/recommended" rule set, install the required dependency:

$ npm install -D @typescript-eslint/parser

Then, create an .eslintrc.json file:

{ "parser": "@typescript-eslint/parser", "parserOptions": { "ecmaVersion": 2018, "project": "./tsconfig.json" }, "extends": ["plugin:@aws-appsync/recommended"] }

Bundling, TypeScript, and source maps

Leveraging libraries and bundling your code

In your resolver and function code, you can leverage both custom and external libraries, so long as they comply with the APPSYNC_JS requirements. This makes it possible to reuse existing code in your application. To make use of libraries that are defined by multiple files, you must use a bundling tool, such as esbuild, to combine your code into a single file that can then be saved to your AWS AppSync resolver or function.

When bundling your code, keep the following in mind:

  • APPSYNC_JS only supports ECMAScript modules (ESM).

  • @aws-appsync/* modules are integrated into APPSYNC_JS and should not be bundled with your code.

  • The APPSYNC_JS runtime environment is similar to NodeJS in that code does not run in a browser environment.

  • You can include an optional source map. However, do not include the source content.

    To learn more about source maps, see Using source maps.

For example, to bundle your resolver code located at src/appsync/getPost.resolver.js, you can use the following esbuild CLI command:

$ esbuild --bundle \
    --sourcemap=inline \
    --sources-content=false \
    --target=esnext \
    --platform=node \
    --format=esm \
    --external:@aws-appsync/utils \
    --outdir=out/appsync \
    src/appsync/getPost.resolver.js

Building your code and working with TypeScript

TypeScript is a programming language developed by Microsoft that offers all of JavaScript’s features along with the TypeScript typing system. You can use TypeScript to write type-safe code and catch errors and bugs at build time before saving your code to AWS AppSync. The @aws-appsync/utils package is fully typed.

The APPSYNC_JS runtime doesn't support TypeScript directly. You must first transpile your TypeScript code to JavaScript code that the APPSYNC_JS runtime supports before saving your code to AWS AppSync. You can use TypeScript to write your code in your local integrated development environment (IDE), but note that you cannot create TypeScript code in the AWS AppSync console.

To get started, make sure you have TypeScript installed in your project. Then, configure your TypeScript transpilation settings to work with the APPSYNC_JS runtime using TSConfig. Here's an example of a basic tsconfig.json file that you can use:

// tsconfig.json
{
  "compilerOptions": {
    "target": "esnext",
    "module": "esnext",
    "noEmit": true,
    "moduleResolution": "node"
  }
}

You can then use a bundling tool like esbuild to compile and bundle your code. For example, given a project with your AWS AppSync code located at src/appsync, you can use the following command to compile and bundle your code:

$ esbuild --bundle \
    --sourcemap=inline \
    --sources-content=false \
    --target=esnext \
    --platform=node \
    --format=esm \
    --external:@aws-appsync/utils \
    --outdir=out/appsync \
    src/appsync/**/*.ts

Using Amplify codegen

You can use the Amplify CLI to generate the types for your schema. From the directory where your schema.graphql file is located, run the following command and review the prompts to configure your codegen:

$ npx @aws-amplify/cli codegen add

To regenerate your types when needed (e.g., after your schema is updated), run the following command:

$ npx @aws-amplify/cli codegen

You can then use the generated types in your resolver code. For example, given the following schema:

type Todo {
  id: ID!
  title: String!
  description: String
}

type Mutation {
  createTodo(title: String!, description: String): Todo
}

type Query {
  listTodos: Todo
}

You could use the generated types in the following example AWS AppSync function:

import { Context, util } from '@aws-appsync/utils'
import * as ddb from '@aws-appsync/utils/dynamodb'
import { CreateTodoMutationVariables, Todo } from './API' // codegen

export function request(ctx: Context<CreateTodoMutationVariables>) {
  ctx.args.description = ctx.args.description ?? 'created on ' + util.time.nowISO8601()
  return ddb.put<Todo>({ key: { id: util.autoId() }, item: ctx.args })
}

export function response(ctx) {
  return ctx.result as Todo
}

Using generics in TypeScript

You can use generics with several of the provided types. For example, the snippet below defines a Todo type:

export type Todo = {
  __typename: "Todo",
  id: string,
  title: string,
  description?: string | null,
};

You can write a resolver for a subscription that makes use of Todo. In your IDE, type definitions and auto-complete hints will guide you in properly using the toSubscriptionFilter transform utility:

import { util, Context, extensions } from '@aws-appsync/utils'
import { Todo } from './API'

export function request(ctx: Context) {
  return {}
}

export function response(ctx: Context) {
  const filter = util.transform.toSubscriptionFilter<Todo>({
    title: { beginsWith: 'hello' },
    description: { contains: 'created' },
  })
  extensions.setSubscriptionFilter(filter)
  return null
}

Linting your bundles

You can automatically lint your bundles by using the esbuild-plugin-eslint plugin. To enable it, add it to the plugins value of your esbuild configuration. Below is a snippet that uses the esbuild JavaScript API in a file called build.mjs:

/* eslint-disable */
import { build } from 'esbuild'
import eslint from 'esbuild-plugin-eslint'
import glob from 'glob'

const files = await glob('src/**/*.ts')

await build({
  format: 'esm',
  target: 'esnext',
  platform: 'node',
  external: ['@aws-appsync/utils'],
  outdir: 'dist/',
  entryPoints: files,
  bundle: true,
  plugins: [eslint({ useEslintrc: true })],
})

Using source maps

You can provide an inline source map (sourcemap) with your JavaScript code. Source maps are useful when you bundle JavaScript or TypeScript code and want to see references to your input source files in your logs and in runtime JavaScript error messages.

Your source map must appear at the end of your code. It is defined by a single comment line in the following format:

//# sourceMappingURL=data:application/json;base64,<base64 encoded string>

Here's an example:

//# sourceMappingURL=data:application/json;base64,ewogICJ2ZXJzaW9uIjogMywKICAic291cmNlcyI6IFsibGliLmpzIiwgImNvZGUuanMiXSwKICAibWFwcGluZ3MiOiAiO0FBQU8sU0FBUyxRQUFRO0FBQ3RCLFNBQU87QUFDVDs7O0FDRE8sU0FBUyxRQUFRLEtBQUs7QUFDM0IsU0FBTyxNQUFNO0FBQ2Y7IiwKICAibmFtZXMiOiBbXQp9Cg==

Source maps can be created with esbuild. The example below shows you how to use the esbuild JavaScript API to include an inline source map when code is built and bundled:

/* eslint-disable */
import { build } from 'esbuild'
import eslint from 'esbuild-plugin-eslint'
import glob from 'glob'

const files = await glob('src/**/*.ts')

await build({
  sourcemap: 'inline',
  sourcesContent: false,
  format: 'esm',
  target: 'esnext',
  platform: 'node',
  external: ['@aws-appsync/utils'],
  outdir: 'dist/',
  entryPoints: files,
  bundle: true,
  plugins: [eslint({ useEslintrc: true })],
})

In particular, the sourcemap and sourcesContent options specify that a source map should be added inline at the end of each build, but should not include the source content. As a convention, we recommend not including source content in your source map. You can disable it in esbuild by setting sources-content to false.

To illustrate how source maps work, review the following example, in which resolver code references helper functions from a helper library. The code contains log statements in both the resolver code and the helper library:

./src/default.resolver.ts (your resolver)

import { Context } from '@aws-appsync/utils'
import { hello, logit } from './helper'

export function request(ctx: Context) {
  console.log('start >')
  logit('hello world', 42, true)
  console.log('< end')
  return 'test'
}

export function response(ctx: Context): boolean {
  hello()
  return ctx.prev.result
}

./src/helper.ts (a helper file)

export const logit = (...rest: any[]) => {
  // a special logger
  console.log('[logger]', ...rest.map((r) => `<${r}>`))
}

export const hello = () => {
  // This just logs a simple sentence, but it could do more.
  console.log('i just say hello..')
}

When you build and bundle the resolver file, your resolver code will include an inline source map. When your resolver runs, entries similar to the following appear in the CloudWatch logs (line and column numbers are illustrative):
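INFO - default.resolver.ts:5:11: "start >"
INFO - helper.ts:3:11: "[logger]" "<hello world>" "<42>" "<true>"
INFO - default.resolver.ts:7:11: "< end"
INFO - helper.ts:8:11: "i just say hello.."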

Looking at these entries, you'll notice that the functionality of the two files has been bundled together, yet the original file name of each statement is clearly reflected in the logs.

Testing

You can use the EvaluateCode API command to remotely test your resolver and function handlers with mocked data before ever saving your code to a resolver or function. To get started with the command, make sure you have added the appsync:evaluateCode permission to your policy. For example:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "appsync:evaluateCode", "Resource": "arn:aws:appsync:<region>:<account>:*" } ] }

You can leverage the command by using the AWS CLI or AWS SDKs. For example, to test your code using the CLI, simply point to your file, provide a context, and specify the handler you want to evaluate:

$ aws appsync evaluate-code \
    --code file://code.js \
    --function request \
    --context file://context.json \
    --runtime name=APPSYNC_JS,runtimeVersion=1.0.0

The response contains an evaluationResult holding the payload returned by your handler, along with a logs object that holds the list of log entries generated by your handler during the evaluation. This makes it easy to debug your code and troubleshoot your evaluation. For example:

{ "evaluationResult": "{\"operation\":\"PutItem\",\"key\":{\"id\":{\"S\":\"record-id\"}},\"attributeValues\":{\"owner\":{\"S\":\"John doe\"},\"expectedVersion\":{\"N\":2},\"authorId\":{\"S\":\"Sammy Davis\"}}}", "logs": [ "INFO - code.js:5:3: \"current id\" \"record-id\"", "INFO - code.js:9:3: \"request evaluated\"" ] }

The evaluation result can be parsed as JSON, which gives:

{ "operation": "PutItem", "key": { "id": { "S": "record-id" } }, "attributeValues": { "owner": { "S": "John doe" }, "expectedVersion": { "N": 2 }, "authorId": { "S": "Sammy Davis" } } }

Using the SDK, you can easily incorporate tests from your test suite to validate your code's behavior. Our example here uses the Jest Testing Framework, but any testing suite works. The following snippet shows a hypothetical validation run. Note that we expect the evaluation response to be valid JSON, so we use JSON.parse to retrieve JSON from the string response:

const AWS = require('aws-sdk')
const fs = require('fs')

const client = new AWS.AppSync({ region: 'us-east-2' })
const runtime = { name: 'APPSYNC_JS', runtimeVersion: '1.0.0' }

test('request correctly calls DynamoDB', async () => {
  const code = fs.readFileSync('./code.js', 'utf8')
  const context = fs.readFileSync('./context.json', 'utf8')
  const contextJSON = JSON.parse(context)

  const response = await client.evaluateCode({ code, context, runtime, function: 'request' }).promise()

  const result = JSON.parse(response.evaluationResult)
  expect(result.key.id.S).toBeDefined()
  expect(result.attributeValues.firstname.S).toEqual(contextJSON.arguments.firstname)
})

This yields the following result:

> jest

PASS ./index.test.js
  ✓ request correctly calls DynamoDB (543 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        1.511 s, estimated 2 s
Ran all test suites.

Migrating from VTL to JavaScript

AWS AppSync allows you to write your business logic for your resolvers and functions using VTL or JavaScript. With both languages, you write logic that instructs the AWS AppSync service on how to interact with your data sources. With VTL, you write mapping templates that must evaluate to a valid JSON-encoded string. With JavaScript, you write request and response handlers that return objects. You don't return a JSON-encoded string.

For example, take the following VTL mapping template to get an Amazon DynamoDB item:

{ "operation": "GetItem", "key": { "id": $util.dynamodb.toDynamoDBJson($ctx.args.id), } }

The utility $util.dynamodb.toDynamoDBJson returns a JSON-encoded string. If $ctx.args.id is set to <id>, the template evaluates to a valid JSON-encoded string:

{ "operation": "GetItem", "key": { "id": {"S": "<id>"}, } }

When working with JavaScript, you do not print out raw JSON-encoded strings within your code, so a utility like toDynamoDBJson is not needed. An equivalent of the mapping template above is:

import { util } from '@aws-appsync/utils';

export function request(ctx) {
  return {
    operation: 'GetItem',
    key: { id: util.dynamodb.toDynamoDB(ctx.args.id) },
  };
}

An alternative is to use util.dynamodb.toMapValues, which is the recommended approach to handle an object of values:

import { util } from '@aws-appsync/utils';

export function request(ctx) {
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ id: ctx.args.id }),
  };
}

This evaluates to:

{ "operation": "GetItem", "key": { "id": { "S": "<id>" } } }
Note

We recommend using the DynamoDB module with DynamoDB data sources:

import * as ddb from '@aws-appsync/utils/dynamodb'

export function request(ctx) {
  return ddb.get({ key: { id: ctx.args.id } })
}

As another example, take the following mapping template to put an item in an Amazon DynamoDB data source:

{ "operation" : "PutItem", "key" : { "id": $util.dynamodb.toDynamoDBJson($util.autoId()), }, "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args) }

When evaluated, this mapping template string must produce a valid JSON-encoded string. When using JavaScript, your code returns the request object directly:

import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const { id = util.autoId(), ...values } = ctx.args;
  return {
    operation: 'PutItem',
    key: util.dynamodb.toMapValues({ id }),
    attributeValues: util.dynamodb.toMapValues(values),
  };
}

which evaluates to:

{ "operation": "PutItem", "key": { "id": { "S": "2bff3f05-ff8c-4ed8-92b4-767e29fc4e63" } }, "attributeValues": { "firstname": { "S": "Shaggy" }, "age": { "N": 4 } } }
Note

We recommend using the DynamoDB module with DynamoDB data sources:

import { util } from '@aws-appsync/utils'
import * as ddb from '@aws-appsync/utils/dynamodb'

export function request(ctx) {
  const { id = util.autoId(), ...item } = ctx.args
  return ddb.put({ key: { id }, item })
}

Choosing between direct data source access and proxying via a Lambda data source

With AWS AppSync and the APPSYNC_JS runtime, you can write your own code that implements your custom business logic by using AWS AppSync functions to access your data sources. This makes it easy for you to directly interact with data sources like Amazon DynamoDB, Aurora Serverless, OpenSearch Service, HTTP APIs, and other AWS services without having to deploy additional computational services or infrastructure. AWS AppSync also makes it easy to interact with an AWS Lambda function by configuring a Lambda data source. Lambda data sources allow you to run complex business logic using AWS Lambda's full set of capabilities to resolve a GraphQL request. In most cases, an AWS AppSync function directly connected to its target data source will provide all of the functionality you need. In situations where you need to implement complex business logic that is not supported by the APPSYNC_JS runtime, you can use a Lambda data source as a proxy to interact with your target data source.

 | Direct data source integration | Lambda data source as a proxy
Use case | AWS AppSync functions interact directly with API data sources. | AWS AppSync functions call Lambda functions that interact with API data sources.
Runtime | APPSYNC_JS (JavaScript) | Any supported Lambda runtime
Maximum size of code | 32,000 characters per AWS AppSync function | 50 MB (zipped, for direct upload) per Lambda function
External modules | Limited - APPSYNC_JS supported features only | Yes
Call any AWS service | Yes - using an AWS AppSync HTTP data source | Yes - using the AWS SDK
Access to the request headers | Yes | Yes
Network access | No | Yes
File system access | No | Yes
Logging and metrics | Yes | Yes
Build and test entirely within AppSync | Yes | No
Cold start | No | No - with provisioned concurrency
Auto-scaling | Yes - transparently by AWS AppSync | Yes - as configured in Lambda
Pricing | No additional charge | Charged for Lambda usage

AWS AppSync functions that integrate directly with their target data source are ideal for use cases like the following:

  • Interacting with Amazon DynamoDB, Aurora Serverless, and OpenSearch Service

  • Interacting with HTTP APIs and passing incoming headers

  • Interacting with AWS services using HTTP data sources (with AWS AppSync automatically signing requests with the provided data source role)

  • Implementing access control before accessing data sources

  • Implementing the filtering of retrieved data prior to fulfilling a request

  • Implementing simple orchestration with sequential execution of AWS AppSync functions in a resolver pipeline

  • Controlling caching and subscription connections in queries and mutations.

AWS AppSync functions that use a Lambda data source as a proxy are ideal for use cases like the following:

  • Using a language other than JavaScript or Velocity Template Language (VTL)

  • Adjusting and controlling CPU or memory to optimize performance

  • Importing third-party libraries or requiring unsupported features in APPSYNC_JS

  • Making multiple network requests and/or getting file system access to fulfill a query

  • Batching requests using batching configuration.