Exporting files from a HealthLake data store

After you create a data store and import data (or use the preloaded sample data), you can export the data to an Amazon S3 bucket. To export data from your HealthLake data store, use one of the following operations.

  • Make an export request using the StartFHIRExportJob API operation with the AWS SDKs, as shown in the first sketch after this list.

    • This operation supports only system-wide export requests.

  • Make an export request using the export syntax of the HealthLake FHIR REST API, as shown in the second sketch after this list.

    • This operation supports making system-wide, Patient, and Group export requests. You can also apply parameters to further filter the data in the export request.
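The following is a minimal sketch of a system-wide SDK export request using the AWS SDK for Python (Boto3). The data store ID, role ARN, bucket, and KMS key values are placeholders; substitute your own.

    # Minimal sketch: start a system-wide export job with the AWS SDK for Python (Boto3).
    # All identifiers below (data store ID, role ARN, bucket, KMS key) are placeholders.
    import boto3

    healthlake = boto3.client("healthlake", region_name="us-east-1")

    response = healthlake.start_fhir_export_job(
        JobName="my-export-job",
        OutputDataConfig={
            "S3Configuration": {
                "S3Uri": "s3://amzn-s3-demo-bucket/export/",
                "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/<key-id>",
            }
        },
        DatastoreId="<datastore-id>",
        DataAccessRoleArn="arn:aws:iam::111122223333:role/HealthLakeExportRole",
    )

    print(response["JobId"], response["JobStatus"])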

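The following is a sketch of a FHIR REST API export request signed with Signature Version 4. The endpoint path and request body shown here are assumptions modeled on the StartFHIRExportJob parameters; confirm the exact export syntax in the HealthLake FHIR REST API reference before use.

    # Sketch of a SigV4-signed $export request against the data store's FHIR REST endpoint.
    # The endpoint path and request body are assumptions; verify them against the
    # HealthLake export syntax reference.
    import json

    import boto3
    import requests
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest

    region = "us-east-1"
    datastore_id = "<datastore-id>"
    url = f"https://healthlake.{region}.amazonaws.com/datastore/{datastore_id}/r4/$export"

    body = json.dumps({
        "JobName": "my-rest-export-job",
        "OutputDataConfig": {
            "S3Configuration": {
                "S3Uri": "s3://amzn-s3-demo-bucket/export/",
                "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/<key-id>",
            }
        },
        "DataAccessRoleArn": "arn:aws:iam::111122223333:role/HealthLakeExportRole",
    })

    # Sign the request with the caller's credentials (SigV4, service name "healthlake").
    request = AWSRequest(method="POST", url=url, data=body,
                         headers={"Content-Type": "application/fhir+json",
                                  "Prefer": "respond-async"})
    SigV4Auth(boto3.Session().get_credentials(), "healthlake", region).add_auth(request)

    response = requests.post(url, data=body, headers=dict(request.headers))
    print(response.status_code, response.text)
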
Important

SDK export requests (the StartFHIRExportJob API operation) and FHIR REST API export requests (the StartFHIRExportJobWithPost API operation) use separate IAM actions, so allow and deny permissions are evaluated independently for each. To restrict both SDK and FHIR REST API exports, deny permissions for both IAM actions.
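
For example, the following sketch attaches an inline policy that denies both export-related IAM actions to a role. The role name and policy name are placeholders.

    # Sketch: restrict both SDK and FHIR REST API exports by denying each IAM action.
    # Role name and policy name are placeholders.
    import json

    import boto3

    iam = boto3.client("iam")

    deny_exports_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": [
                    "healthlake:StartFHIRExportJob",         # SDK export
                    "healthlake:StartFHIRExportJobWithPost", # FHIR REST API export
                ],
                "Resource": "*",
            }
        ],
    }

    iam.put_role_policy(
        RoleName="ExampleRestrictedRole",
        PolicyName="DenyHealthLakeExports",
        PolicyDocument=json.dumps(deny_exports_policy),
    )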

Both operations export your files only to an Amazon Simple Storage Service (Amazon S3) bucket. All files from your HealthLake data store are exported as newline delimited JSON (.ndjson) files, where each line is a valid FHIR resource.
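
As an illustration, each exported file can be read line by line, with every line parsed as one FHIR resource. The bucket and object key below are placeholders for your export output location.

    # Sketch: read one exported .ndjson file and count resources by type.
    # Bucket and key are placeholders for the export output location.
    import json
    from collections import Counter

    import boto3

    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="amzn-s3-demo-bucket", Key="export/Patient-0.ndjson")

    counts = Counter()
    for line in obj["Body"].iter_lines():
        if line:
            resource = json.loads(line)  # each line is a complete FHIR resource
            counts[resource["resourceType"]] += 1

    print(counts)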

Both operations require a service role that defines HealthLake as the service principal and grants access to the Amazon S3 bucket where you want to export your files. To learn more, see Setting up permissions for export jobs.
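
The following sketch creates such a service role with HealthLake as the service principal and grants it write access to the export bucket. The S3 actions shown are assumptions; see Setting up permissions for export jobs for the exact permissions HealthLake requires.

    # Sketch: create a service role that HealthLake can assume for export jobs.
    # The S3 permissions shown are assumptions; see "Setting up permissions for
    # export jobs" for the exact policy HealthLake requires.
    import json

    import boto3

    iam = boto3.client("iam")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "healthlake.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    role = iam.create_role(
        RoleName="HealthLakeExportRole",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket",
                "arn:aws:s3:::amzn-s3-demo-bucket/*",
            ],
        }],
    }

    iam.put_role_policy(
        RoleName="HealthLakeExportRole",
        PolicyName="HealthLakeExportS3Access",
        PolicyDocument=json.dumps(bucket_policy),
    )

    print(role["Role"]["Arn"])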

You can queue multiple import or export jobs. These asynchronous jobs are processed in first in, first out (FIFO) order. You can still create, read, update, or delete FHIR resources while an import or export job is in progress.
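
Because jobs are processed asynchronously, you can poll their status. The following sketch polls an export job with the DescribeFHIRExportJob API operation until it finishes; the data store ID and job ID are placeholders returned by the export request.

    # Sketch: poll an export job's status until it is no longer queued or running.
    # Data store ID and job ID are placeholders returned by the export request.
    import time

    import boto3

    healthlake = boto3.client("healthlake", region_name="us-east-1")

    while True:
        job = healthlake.describe_fhir_export_job(
            DatastoreId="<datastore-id>",
            JobId="<job-id>",
        )["ExportJobProperties"]
        print(job["JobStatus"])
        if job["JobStatus"] not in ("SUBMITTED", "IN_PROGRESS"):
            break
        time.sleep(30)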