
Pipeline Resolvers

AWS AppSync executes resolvers on a GraphQL field. In some cases, applications require executing multiple operations to resolve a single GraphQL field. With pipeline resolvers, developers can compose operations (called Functions) and execute them in sequence. Pipeline resolvers are useful for applications that, for instance, need to perform an authorization check before fetching data for a field.

Anatomy of a pipeline resolver

A pipeline resolver is composed of a Before mapping template, an After mapping template, and a list of Functions. Each Function has a request and response mapping template that it executes against a Data Source. Because a pipeline resolver delegates execution to a list of functions, it is not linked to any data source itself. Unit resolvers and functions are the primitives that execute operations against data sources. See the Resolver Mapping Template Overview for more information.

Before Mapping template

The request mapping template of a pipeline resolver, also called the Before step, allows you to perform preparation logic before the defined functions execute.

Functions list

The list of functions a pipeline resolver runs in sequence. The evaluated result of the pipeline resolver request mapping template is made available to the first function as $ctx.prev.result. Each function's output is then made available to the next function as $ctx.prev.result.

After mapping template

The response mapping template of a pipeline resolver, also called the After step, allows you to perform final mapping logic from the output of the last function to the expected GraphQL field type. The output of the last function in the functions list is available in the pipeline resolver response mapping template as $ctx.prev.result or $ctx.result.

Execution Flow

Given a pipeline resolver composed of two functions, the list below represents the execution flow when the resolver is invoked:

  1. Pipeline resolver BEFORE mapping template

  2. Function 1: Function request mapping template

  3. Function 1: Data source invocation

  4. Function 1: Function response mapping template

  5. Function 2: Function request mapping template

  6. Function 2: Data source invocation

  7. Function 2: Function response mapping template

  8. Pipeline resolver AFTER mapping template

Pipeline resolver execution flow is unidirectional and defined statically on the resolver.

Useful Apache Velocity Template Language (VTL) Utilities

As the complexity of an application increases, VTL utilities and directives can help streamline development. The following utilities can help you when you're working with pipeline resolvers.

$ctx.stash

The stash is a Map that is made available inside each resolver and function mapping template. The same stash instance lives through a single resolver execution. What this means is you can use the stash to pass arbitrary data across request and response mapping templates, and across functions in a pipeline resolver. The stash exposes the same methods as the Java Map data structure.
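For illustration, here is a minimal sketch of passing a value through the stash; the startedAt key is arbitrary and used only for this example:

## BEFORE mapping template: put a value into the stash
$util.qr($ctx.stash.put("startedAt", $util.time.nowISO8601()))
{}

## In any later function request mapping template of the same resolver,
## the same stash instance is still available:
{
    "payload": {
        "startedAt": "${ctx.stash.startedAt}"
    }
}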

$ctx.prev.result

The $ctx.prev.result represents the result of the previous operation that was executed in the pipeline. If the previous operation was the pipeline resolver request mapping template, then $ctx.prev.result represents the output of the evaluation of the template and is made available to the first function in the chain. If the previous operation was the first function, then $ctx.prev.result represents the output of the first function and is made available to the second function in the pipeline. If the previous operation was the last function, then $ctx.prev.result represents the output of the last function and is made available to the pipeline resolver response mapping template.
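For example, a function request mapping template can guard on the previous step's output before forwarding it to the function's data source. The following is only a sketch:

## Stop early if the previous step produced no data,
## otherwise forward it to this function's data source.
#if($util.isNull($ctx.prev.result))
    $util.error("Previous step returned no data.")
#end
{
    "payload": $util.toJson($ctx.prev.result)
}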

#return(data: Object)

The #return(data: Object) directive comes in handy if you need to return prematurely from any mapping template. #return(data: Object) is analogous to the return keyword in programming languages because it returns from the closest scoped block of logic. This means that using #return inside a resolver mapping template returns from the resolver, and using #return(data: Object) in a resolver mapping template sets data on the GraphQL field. Additionally, using #return(data: Object) from a function mapping template returns from the function and continues the execution to either the next function in the pipeline or the resolver response mapping template.
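For example, a function request mapping template might return a previously computed value instead of invoking its data source. This is only a sketch, and the cachedUser stash key is hypothetical:

## If an earlier step already stored a result in the stash, return it
## from this function immediately; the rest of the template is skipped.
#if(!$util.isNull($ctx.stash.cachedUser))
    #return($ctx.stash.cachedUser)
#end
{
    "payload": $util.toJson($ctx.stash)
}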

#return

Same as #return(data: Object), except that null is returned instead.

$util.error

The $util.error utility is useful for throwing a field error. Using $util.error inside a function mapping template throws a field error immediately, which prevents subsequent functions from being executed. For more details and other $util.error signatures, visit the Resolver Mapping Template Utility Reference.
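For example, a function request mapping template might reject the request when a required value is missing; a minimal sketch, assuming an email value was placed in the stash by an earlier step:

## Throw a field error if no email was provided; later functions
## in the pipeline will not run.
#if($util.isNullOrEmpty($ctx.stash.email))
    $util.error("An email address is required.", "ValidationError")
#end
{
    "payload": $util.toJson($ctx.stash)
}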

$util.appendError

The $util.appendError utility is similar to $util.error(), with the major distinction that it doesn't interrupt the evaluation of the mapping template. Instead, it signals that there was an error with the field, but allows the template to be evaluated and consequently return data. Using $util.appendError inside a function will not disrupt the execution flow of the pipeline. For more details and other $util.appendError signatures, visit the Resolver Mapping Template Utility Reference.
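For example, a function response mapping template can record a non-fatal problem while still returning the data source result; a minimal sketch, assuming a lastLogin attribute that may or may not be present:

## Record a warning in the GraphQL errors array, but keep evaluating
## the template and return the result anyway.
#if($util.isNull($ctx.result.lastLogin))
    $util.appendError("lastLogin is missing for this user.")
#end
$util.toJson($ctx.result)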

Create A Pipeline Resolver

In the AWS AppSync console, go to the Schema page.

Save the following schema:

schema {
    query: Query
    mutation: Mutation
}

type Mutation {
    signUp(input: Signup): User
}

type Query {
    getUser(id: ID!): User
}

input Signup {
    username: String!
    email: String!
}

type User {
    id: ID!
    username: String
    email: AWSEmail
}

We are going to wire a pipeline resolver to the signUp field on the Mutation type. In the Mutation type on the right side, choose Attach resolver next to the signUp field. On the Create Resolver page, click the Switch to Pipeline button. The page should now show three sections: a Before Mapping Template text area, a Functions section, and an After Mapping Template text area.

Our pipeline resolver signs up a user by first validating the email address input and then saving the user in the system. We are going to encapsulate the email validation inside a validateEmail function, and the saving of the user inside a saveUser function. The validateEmail function executes first, and if the email is valid, then the saveUser function executes.

The execution flow will be as follows:

  1. Mutation.signUp resolver request mapping template

  2. validateEmail function

  3. saveUser function

  4. Mutation.signUp resolver response mapping template

Because we will probably reuse the validateEmail function in other resolvers on our API, we want to avoid accessing $ctx.args, since the arguments will change from one GraphQL field to another. Instead, we can use $ctx.stash to store the email attribute from the signUp(input: Signup) input field argument.

BEFORE mapping template:

## store email input field into a generic email key
$util.qr($ctx.stash.put("email", $ctx.args.input.email))
{}

The console provides a default passthrough AFTER mapping template that we will use:

$util.toJson($ctx.result)

Create A Function

From the pipeline resolver page, in the Functions section, click Create Function. It is also possible to create functions without going through the resolver page: in the AWS AppSync console, go to the Functions page and choose the Create Function button. Let's create a function that checks whether an email is valid and comes from a specific domain. If the email is not valid, the function raises an error. Otherwise, it forwards whatever input it was given.

Select the None data source on the function page, and fill in the validateEmail request mapping template:

#set($valid = $util.matches("^[a-zA-Z0-9_.+-]+@(?:(?:[a-zA-Z0-9-]+\.)?[a-zA-Z]+\.)?(myvaliddomain)\.com", $ctx.stash.email))
#if (!$valid)
    $util.error("$ctx.stash.email is not a valid email.")
#end
{
    "payload": {
        "email": "${ctx.stash.email}"
    }
}

and response mapping template:

$util.toJson($ctx.result)

We just created our validateEmail function. Repeat these steps to create the saveUser function with the following request and response mapping templates. For the sake of simplicity, we use a None data source and pretend the user has been saved in the system after the function executes.

Request mapping template:

## $ctx.prev.result contains the signup input values. We could have also
## used $ctx.args.input.
{
    "payload": $util.toJson($ctx.prev.result)
}

and response mapping template:

## an id is required so let's add a unique random identifier to the output
$util.qr($ctx.result.put("id", $util.autoId()))
$util.toJson($ctx.result)

We just created our saveUser function.

Adding a Function to a Pipeline Resolver

Our functions should have been automatically added to the pipeline resolver we just created. If you created the functions through the console Functions page instead, you can click Add Function on the resolver page to attach them. Add both the validateEmail and saveUser functions to the resolver. The validateEmail function should be placed before the saveUser function. As you add more functions, you can use the up and down arrows to reorganize the order of execution of your functions.

Executing a Query

In the AWS AppSync console, go to the Queries page. Enter the following query:

mutation {
    signUp(input: {
        email: "nadia@myvaliddomain.com"
        username: "nadia"
    }) {
        id
        username
    }
}

The mutation should return something like the following:

{ "data": { "signUp": { "id": "256b6cc2-4694-46f4-a55e-8cb14cc5d7fc", "username": "nadia" } } }

We have successfully signed up our user and validated the input email using a pipeline resolver. To follow a more complete tutorial focusing on pipeline resolvers, you can go to Tutorial: Pipeline Resolvers.