Components - FHIR Works on AWS


Authentication mechanism

FHIR Works on AWS uses an Amazon Cognito user pool for Amazon API Gateway authentication. Once authenticated, Amazon Cognito provides a JSON Web Token (JWT) to the requestor, which must accompany all subsequent API requests. If a valid JWT is not provided, the API request fails and returns an HTTP 403 Forbidden response.
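The JWT travels with each request, and the authorizer inspects its claims to decide what the requestor may do. A minimal sketch of reading a Cognito-style groups claim from a token (the toy token below is illustrative only; real tokens are issued and signed by the Amazon Cognito user pool and must be signature-verified before trusting any claim):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT to inspect its claims."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def b64(obj) -> str:
    """Helper: base64url-encode a JSON object the way JWT segments are encoded."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Toy token with a Cognito-style "cognito:groups" claim (illustrative only).
token = f'{b64({"alg": "RS256"})}.{b64({"cognito:groups": ["practitioner"]})}.signature'
print(jwt_claims(token)["cognito:groups"])  # ['practitioner']
```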

After a request is authenticated in Amazon API Gateway, an AWS Lambda function applies further authorization logic: the groups claim in the JWT specifies which group the requestor belongs to. Based on the group, the requestor has the following permissions:

  • Practitioner group member: permission to perform any operation on any resource.

  • Non-practitioner group member: permission to read all financial resources, such as Invoice or ExplanationOfBenefit.

  • Auditor group member: permission to read the Patient resource.
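The group-to-permission mapping above can be sketched as a simple lookup table. Group names, the rule shape, and the deny-by-default behavior are assumptions for illustration; the solution's actual rules live in its authorization Lambda function:

```python
# Illustrative group-based access rules mirroring the list above; the real
# solution's rule format may differ, and "financial resources" covers more
# types than the two shown here.
PERMISSIONS = {
    "practitioner": {"operations": "*", "resources": "*"},
    "non-practitioner": {"operations": {"read"}, "resources": {"Invoice", "ExplanationOfBenefit"}},
    "auditor": {"operations": {"read"}, "resources": {"Patient"}},
}

def is_allowed(group: str, operation: str, resource: str) -> bool:
    rule = PERMISSIONS.get(group)
    if rule is None:
        return False  # unknown group: deny by default (assumption)
    ops_ok = rule["operations"] == "*" or operation in rule["operations"]
    res_ok = rule["resources"] == "*" or resource in rule["resources"]
    return ops_ok and res_ok
```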

Resource modification and search mechanism

After a create, update or delete request has been authorized and is within the AWS Lambda function, the change is persisted to the resource-db-dev table in Amazon DynamoDB. This table is connected to a DynamoDB stream, which pushes changes to the Amazon OpenSearch Service domain. Amazon OpenSearch Service is used to support search functionality in FHIR Works on AWS. Refer to the capability statement for search parameters supported.
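The replication step can be sketched as a stream handler that mirrors each change into the search index. The handler shape and the dict standing in for the Amazon OpenSearch Service index are illustrative; the solution's actual handler calls the OpenSearch APIs:

```python
# Hedged sketch of processing DynamoDB Streams records into a search index.
def sync_stream_records(records: list, search_index: dict) -> dict:
    for record in records:
        event = record["eventName"]               # INSERT | MODIFY | REMOVE
        doc_id = record["dynamodb"]["Keys"]["id"]["S"]
        if event in ("INSERT", "MODIFY"):
            # Index the new version of the resource so it becomes searchable.
            search_index[doc_id] = record["dynamodb"]["NewImage"]
        elif event == "REMOVE":
            # Drop deleted resources from the index.
            search_index.pop(doc_id, None)
    return search_index
```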

FHIR bundle mechanism

Bundles are containers for collections of resources. A single bundle can carry many requests for the FHIR server (for example, a request that writes 10 resources at once instead of calling the FHIR server 10 separate times). Bundles can be handled as either batches or transactions; however, FHIR Works on AWS only supports transactions. A transaction requires every request within the bundle to succeed, or everything rolls back to the state prior to bundle submission.

To meet this transactional requirement, this solution implements locking on the Amazon DynamoDB table: an additional status value is added to each resource to indicate whether the bundle has completed its transaction.
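A transaction Bundle that performs two writes in one request looks like the following (the resource contents are illustrative; the request/resource entry shape follows the FHIR Bundle specification):

```python
# A minimal FHIR transaction Bundle: both entries must succeed together,
# or the server rolls everything back.
bundle = {
    "resourceType": "Bundle",
    "type": "transaction",
    "entry": [
        {
            "request": {"method": "POST", "url": "Patient"},
            "resource": {"resourceType": "Patient", "name": [{"family": "Smith"}]},
        },
        {
            "request": {"method": "PUT", "url": "Observation/example"},
            "resource": {
                "resourceType": "Observation",
                "id": "example",
                "status": "final",
                "code": {"text": "heart rate"},
            },
        },
    ],
}
```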

FHIR binary resource mechanism

This solution supports unstructured data through the Binary FHIR resource, which represents the data of a single raw artifact as digital content accessible in its native format. A Binary resource can contain any content: text, an image, a PDF, a .zip archive, and so on. Binary resources are stored in the fhirbinarybucket Amazon S3 bucket and are indexed via a Binary resource entry in the Amazon DynamoDB table. Binary resources cannot be searched.

This solution handles Binary resources by using the Amazon S3 getPresignedUrl API and vending that URL to the requestor, who can then use it to upload the file. The following workflow outlines the activities upon receipt of a CreateBinary request:

  • Amazon API Gateway authorizes the request and sends the request to AWS Lambda.

  • Lambda validates whether the user is in an appropriate group.

  • Lambda writes the Binary metadata to Amazon DynamoDB.

  • Lambda returns an Amazon S3 pre-signed URL allowing the customer to upload directly to Amazon S3.

  • The customer uses the pre-signed URL and uploads the file to Amazon S3.
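The workflow above can be sketched end to end. The helper name, the dict standing in for the DynamoDB table, and the URL format are illustrative stand-ins for the solution's DynamoDB write and S3 getPresignedUrl call:

```python
import uuid

def create_binary(metadata_table: dict, content_type: str) -> dict:
    """Sketch of the CreateBinary flow: store metadata, then vend an upload URL."""
    resource_id = str(uuid.uuid4())
    # Write the Binary metadata to the DynamoDB table (simulated as a dict).
    metadata_table[resource_id] = {"resourceType": "Binary", "contentType": content_type}
    # Vend a pre-signed PUT URL (illustrative stand-in for getPresignedUrl output;
    # the signature query string is elided).
    presigned_put_url = f"https://fhirbinarybucket.s3.amazonaws.com/{resource_id}?X-Amz-Signature=..."
    return {"id": resource_id, "presignedPutUrl": presigned_put_url}
```

The customer then issues an HTTP PUT of the file content against the returned URL, so the binary payload never passes through API Gateway or Lambda.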

This Amazon S3 pre-signed URL approach is outside of the FHIR specification, but corresponds with the Bulk Data Implementation Guide specification, which remains at the Standard for Trial Use 1 (STU1) level of maturity. Refer to HL7 balloting levels for more information about the STU level.

FHIR bulk data access mechanism

Bulk export allows you to export your data from Amazon DynamoDB to Amazon S3. This solution currently supports system-level export and group export. To test this feature on FHIR Works on AWS, you can make API requests using the Fhir.postman_collection.json file by following these steps:

  1. In the FHIR examples collection, under the Export folder, use the GET System Export request to initiate an export request.

  2. In the response, check the Content-Location header field for the URL. The URL is in the <base-url>/$export/<jobId> format.

  3. To get the status of the export job, in the Export folder, use the GET System Job status request. Enter the job ID value from step 2 in that request.

  4. Check the response from GET System Job status. If the job is in progress, the response header displays the x-progress: in-progress field. Keep polling that URL every 10 seconds. Once the job is complete, the response returns a JSON body with the pre-signed Amazon S3 URLs of your exported data.

  5. Download the exported data using those URLs.


    {
        "transactionTime": "2021-03-29T16:49:00.819Z",
        "request": "$export?_outputFormat=ndjson&_since=1800-01-01T00%3A00%3A00.000Z&_type=Patient",
        "requiresAccessToken": false,
        "output": [
            {
                "type": "Patient",
                "url": ""
            }
        ],
        "error": []
    }

    To cancel an export job, use the Cancel Export Job request in the Export folder located in the Postman collections.
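The polling loop in steps 3 and 4 can be sketched as follows. Here poll is a stand-in for an HTTP GET against the <base-url>/$export/<jobId> status URL, and the status codes are assumptions based on the in-progress/complete behavior described above:

```python
import time

def wait_for_export(poll, interval: int = 10, max_attempts: int = 60):
    """Poll the export job status URL until the job completes or we give up."""
    for _ in range(max_attempts):
        status_code, body = poll()
        if status_code == 200:
            return body           # complete: body carries the pre-signed S3 URLs
        time.sleep(interval)      # still in progress (x-progress: in-progress)
    raise TimeoutError("export job did not complete in time")
```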