
Create an API as an Amazon S3 Proxy

As an example of using an API in API Gateway to proxy Amazon S3, this section describes how to create and configure an API that exposes Amazon S3 operations for listing the caller's buckets, creating and deleting buckets, and reading, writing, and deleting objects in a bucket.

Note

To integrate your API Gateway API with Amazon S3, you must choose a region where both the API Gateway and Amazon S3 services are available. For region availability, see Regions and Endpoints.

You may want to import the sample API as an Amazon S3 proxy, as shown in Swagger Definitions of the Sample API as an Amazon S3 Proxy. For instructions on how to import an API using the Swagger definition, see Import an API.

To use the API Gateway console to create the API, you must first sign up for an AWS account.

If you do not have an AWS account, use the following procedure to create one.

To sign up for AWS

  1. Open https://aws.amazon.com/ and choose Create an AWS Account.

  2. Follow the online instructions.

Set Up IAM Permissions for the API to Invoke Amazon S3 Actions

To allow the API to invoke required Amazon S3 actions, you must have appropriate IAM policies attached to an IAM role. This section describes how to verify, and if necessary create, the required IAM role and policies.

For your API to view or list Amazon S3 buckets and objects, you can use the IAM-provided AmazonS3ReadOnlyAccess policy in the IAM role. The ARN of this policy is arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess, and the policy document is shown as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}

This policy document states that any of the Amazon S3 Get* and List* actions can be invoked on any of the Amazon S3 resources.

For your API to update Amazon S3 buckets and objects, you can use a custom policy for the Amazon S3 Put* actions, as shown as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:Put*",
      "Resource": "*"
    }
  ]
}

For your API to work with Amazon S3 Get*, List* and Put* actions, you can add the above read-only and put-only policies to the IAM role.

For your API to invoke the Amazon S3 Post* actions, you must use an Allow policy for the s3:Post* actions in the IAM role. For a complete list of Amazon S3 actions, see Specifying Amazon S3 Permissions in a Policy.

For your API to create, view, update, and delete buckets and objects in Amazon S3, you can use the IAM-provided AmazonS3FullAccess policy in the IAM role. The ARN is arn:aws:iam::aws:policy/AmazonS3FullAccess, and the policy document is shown as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

Having chosen the desired IAM policies, create an IAM role and attach the policies to it. The resulting IAM role must contain the following trust policy for API Gateway to assume this role at runtime.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

When using the IAM console to create the role, choose the Amazon API Gateway role type to ensure that this trust policy is automatically included.
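
If you prefer to script this step, the following Boto3 sketch creates such a role and attaches the read-only policy; the role name apig-s3-proxy-role is an arbitrary example value, and you can attach AmazonS3FullAccess instead if your API must also write to Amazon S3.

import json
import boto3

iam = boto3.client("iam")

# The trust policy shown above, allowing API Gateway to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "apigateway.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="apig-s3-proxy-role",                       # example role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the managed read-only policy discussed above.
iam.attach_role_policy(
    RoleName="apig-s3-proxy-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

print(role["Role"]["Arn"])   # use this ARN as the Execution role in API Gateway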

Create API Resources to Represent Amazon S3 Resources

We will use the API's root (/) resource as the container of an authenticated caller's Amazon S3 buckets. We will also create Folder and Item resources to represent a particular Amazon S3 bucket and a particular Amazon S3 object, respectively. The caller specifies the folder name and object key as path parameters in the request URL.

To create an API resource that exposes the Amazon S3 service features

  1. In the API Gateway console, create an API named MyS3. This API's root resource (/) represents the Amazon S3 service.

  2. Under the API's root resource, create a child resource named Folder and set the required Resource Path as /{folder}.

  3. For the API's Folder resource, create an Item child resource. Set the required Resource Path as /{item}.

    
    (Image: Create an API in API Gateway as an Amazon S3 proxy)
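
If you script the setup instead of using the console, a minimal Boto3 sketch of the same steps might look like the following; the region is an example value, and the ids it captures are reused in later sketches in this section.

import boto3

apigw = boto3.client("apigateway", region_name="us-west-2")   # example region

# Create the API; its root resource (/) is created automatically.
api = apigw.create_rest_api(name="MyS3")
api_id = api["id"]

# Look up the id of the root (/) resource.
root_id = next(
    res["id"]
    for res in apigw.get_resources(restApiId=api_id)["items"]
    if res["path"] == "/"
)

# /{folder} represents an Amazon S3 bucket; /{folder}/{item} represents an object in it.
folder = apigw.create_resource(restApiId=api_id, parentId=root_id, pathPart="{folder}")
item = apigw.create_resource(restApiId=api_id, parentId=folder["id"], pathPart="{item}")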

Expose an API Method to List the Caller's Amazon S3 Buckets

Getting the list of the caller's Amazon S3 buckets involves invoking the GET Service action on Amazon S3. On the API's root resource (/), create the GET method. Then configure the GET method to integrate with Amazon S3, as follows.

To create and initialize the API's GET / method

  1. Choose Create method on the root node (/) from the Actions drop-down menu at the top-right corner of the Resources panel.

  2. Choose GET from the drop-down list of HTTP verbs, and choose the check-mark icon to start creating the method.

    
    (Image: Create a method for integration with Amazon S3)
  3. In the / - GET - Setup pane, choose AWS Service Proxy for the Integration type.

  4. From the list, choose an AWS Region.

  5. From AWS Service, choose S3.

  6. From HTTP method, choose GET.

  7. For Action Type, choose Use path override.

  8. (Optional) For Path override, type /.

  9. Copy the previously created IAM role's ARN (from the IAM console) and paste it into Execution role.

    
    (Image: Set up a method for integration with Amazon S3)
  10. Choose Save to finish setting up this method.

This setup integrates the frontend GET https://your-api-host/stage/ request with the backend GET https://your-s3-host/.

Note

After the initial setup, you can modify these settings in the Integration Request page of the method.
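
For reference, a minimal Boto3 sketch of the equivalent setup is shown below; it continues the earlier sketch (apigw, api_id, and root_id), and the account id in the execution role ARN is a placeholder.

# Continuing the earlier sketch: apigw, api_id, and root_id as defined above.

# GET / on the API; authorization is switched to AWS_IAM in the next procedure.
apigw.put_method(
    restApiId=api_id,
    resourceId=root_id,
    httpMethod="GET",
    authorizationType="NONE",
)

# AWS service integration with Amazon S3, using a path override of "/".
apigw.put_integration(
    restApiId=api_id,
    resourceId=root_id,
    httpMethod="GET",
    type="AWS",
    integrationHttpMethod="GET",
    uri="arn:aws:apigateway:us-west-2:s3:path//",
    credentials="arn:aws:iam::123456789012:role/apig-s3-proxy-role",  # execution role ARN (placeholder)
)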

To control who can call this method of our API, we turn on the method authorization flag and set it to AWS_IAM.

To enable IAM to control access to the GET / method

  1. From Method Execution, choose Method Request.

  2. Choose the pencil icon next to Authorization.

  3. Choose AWS_IAM from the drop-down list.

  4. Choose the check-mark icon to save the setting.

    
    (Image: Declare method response types)
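
The same change can be scripted with a patch operation on the method; this sketch reuses the apigw client and ids from the earlier sketches.

# Continuing the earlier sketch: apigw, api_id, and root_id as defined above.
apigw.update_method(
    restApiId=api_id,
    resourceId=root_id,
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "AWS_IAM"},
    ],
)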

For our API to return successful responses and exceptions properly to the caller, let us declare the 200, 400, and 500 responses in Method Response. We use the default mapping for the 200 response so that backend responses with status codes not declared here are returned to the caller as 200 responses.

To declare response types for the GET / method

  1. From the Method Execution pane, choose the Method Response box. API Gateway declares the 200 response by default.

  2. Choose Add response, enter 400 in the input text box, and choose the check-mark to finish the declaration.

  3. Repeat the above step to declare the 500 response type. The final setting is shown as follows:

    
    (Image: Declare method response types)
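
Programmatically, the additional responses can be declared as follows; this sketch reuses the apigw client and ids from the earlier sketches, and the 200 response (which the console declares by default) is created together with its header mapping in the next sketch.

# Continuing the earlier sketch: apigw, api_id, and root_id as defined above.
for status_code in ("400", "500"):
    apigw.put_method_response(
        restApiId=api_id,
        resourceId=root_id,
        httpMethod="GET",
        statusCode=status_code,
    )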

Because a successful integration response from Amazon S3 returns the bucket list as an XML payload, while the default method response from API Gateway is a JSON payload, we must map the backend Content-Type header value to the frontend counterpart. Otherwise, the client receives application/json as the content type when the response body is actually an XML string. The following procedure shows how to set this up. In addition, we also want to return other header parameters, such as Date and Content-Length, to the client.

To set up response header mappings for the GET / method

  1. In the API Gateway console, choose Method Response. Add the Content-Type header for the 200 response type.

    
    (Image: Declare method response headers)
  2. In Integration Response, for Content-Type, type integration.response.header.Content-Type for the method response.

    
    (Image: Map integration response headers to method response headers)

    With the above header mappings, API Gateway will translate the Date header from the backend to the Timestamp header for the client.

  3. Still in Integration Response, choose Add integration response and type an appropriate regular expression in the HTTP status regex text box for each remaining method response status. Repeat until all of the method response statuses are covered.

    
    (Image: Set up integration response status codes)
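
The header mapping and the status-code routing can be expressed in Boto3 roughly as follows. This sketch reuses the earlier client and ids, creates the 200 method response with its Content-Type header (in the console it already exists), and uses 4\d{2} and 5\d{2} as example regular expressions for the 400 and 500 responses.

# Continuing the earlier sketch: apigw, api_id, and root_id as defined above.

# Declare the Content-Type header on the 200 method response ...
apigw.put_method_response(
    restApiId=api_id,
    resourceId=root_id,
    httpMethod="GET",
    statusCode="200",
    responseParameters={"method.response.header.Content-Type": False},
)

# ... and map the backend Content-Type header to it on the default (200) integration response.
apigw.put_integration_response(
    restApiId=api_id,
    resourceId=root_id,
    httpMethod="GET",
    statusCode="200",
    responseParameters={
        "method.response.header.Content-Type": "integration.response.header.Content-Type",
    },
)

# Route 4xx and 5xx backend status codes to the 400 and 500 method responses.
apigw.put_integration_response(
    restApiId=api_id, resourceId=root_id, httpMethod="GET",
    statusCode="400", selectionPattern=r"4\d{2}",
)
apigw.put_integration_response(
    restApiId=api_id, resourceId=root_id, httpMethod="GET",
    statusCode="500", selectionPattern=r"5\d{2}",
)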

As a good practice, let us test the API we have configured so far.

To test the GET method on the API root resource

  1. Go back to Method Execution and choose Test in the Client box.

  2. Choose Test in the GET / - Method Test pane. An example result is shown as follows.

    
    (Image: Test API Root GET Bucket Result)

Note

To use the API Gateway console to test the API as an Amazon S3 proxy, make sure that the targeted S3 bucket is in a different region from the API's region. Otherwise, you may get a 500 Internal Server Error response. This limitation does not apply to any deployed API.
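
You can run the same test without the console through the test-invoke feature of the API Gateway API; this sketch reuses the earlier client and ids.

# Continuing the earlier sketch: apigw, api_id, and root_id as defined above.
result = apigw.test_invoke_method(
    restApiId=api_id,
    resourceId=root_id,
    httpMethod="GET",
)
print(result["status"])        # e.g. 200
print(result["body"][:300])    # the XML bucket listing returned by Amazon S3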

Expose API Methods to Access an Amazon S3 Bucket

To work with an Amazon S3 bucket, we expose the GET, PUT, and DELETE methods on the /{folder} resource to list objects in a bucket, create a new bucket, and delete an existing bucket. The instructions are similar to those described in Expose an API Method to List the Caller's Amazon S3 Buckets. In the following discussion, we outline the general tasks and highlight the relevant differences.

To expose GET, PUT and DELETE methods on a folder resource

  1. On the /{folder} node from the Resources tree, create the DELETE, GET and PUT methods, one at a time.

  2. Set up the initial integration of each created method with its corresponding Amazon S3 endpoint. The following screenshot illustrates this setup for the PUT /{folder} method. For the DELETE /{folder} and GET /{folder} methods, replace the PUT value of HTTP method with DELETE and GET, respectively.

    
    (Image: Set up PUT /{folder} method)

    Notice that we used the {bucket} path parameter in the Amazon S3 endpoint URLs to specify the bucket. We will need to map the {folder} path parameter of the method requests to the {bucket} path parameter of the integration requests.

  3. To map {folder} to {bucket}:

    1. Choose Method Execution and then Integration Request.

    2. Expand URL Path Parameters and choose Add path.

    3. Type bucket in the Name column and method.request.path.folder in the Mapped from column. Choose the check-mark icon to save the mapping.

      
      (Image: Set up PUT /{folder} method)
  4. In Method Request, add the Content-Type header to the HTTP Request Headers section.

    
    (Image: Set up PUT /{folder} method)

    This is needed mostly for testing with the API Gateway console, where you must specify application/xml for an XML payload.

  5. In Integration Request, set up the following header mappings, following the instructions described in Expose an API Method to List the Caller's Amazon S3 Buckets.

    
    (Image: Set up header mappings for the PUT /{folder} method)

    The x-amz-acl header is for specifying access control on the folder (or the corresponding Amazon S3 bucket). For more information, see Amazon S3 PUT Bucket Request. The Expect:'100-continue' header ensures that a request payload is submitted only when the request parameters are validated.

  6. To test the PUT method, choose Test in the Client box from Method Execution, and enter the following as input to the testing:

    1. In folder, type a bucket name.

    2. For the Content-Type header, type application/xml.

    3. In Request Body, provide the bucket region as the location constraint, declared in an XML fragment as the request payload. For example,

      <CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <LocationConstraint>us-west-2</LocationConstraint>
      </CreateBucketConfiguration>

      
      (Image: Test the PUT method to create an Amazon S3 bucket)
  7. Repeat the preceding steps to create and configure the GET and DELETE methods on the API's /{folder} resource.

The above examples illustrate how to create a new bucket in the specified region, to view the list of objects in the bucket, and to delete the bucket. Other Amazon S3 bucket operations allow you to work with the metadata or properties of the bucket. For example, you can set up your API to call the Amazon S3 PUT /?notification action to set up notifications on the bucket, to call PUT /?acl to set an access control list on the bucket, and so on. The API setup is similar, except that you must append the appropriate query parameters to the Amazon S3 endpoint URLs. At run time, you must provide the appropriate XML payload to the method request. The same applies to supporting the other GET and DELETE operations on an Amazon S3 bucket. For more information on possible Amazon S3 actions on a bucket, see Amazon S3 Operations on Buckets.
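
For reference, the following Boto3 sketch mirrors the PUT /{folder} setup described in the preceding procedure. It reuses the earlier client and resource ids; the execution role ARN and the x-amz-acl value are example values, and the key points are the {bucket} placeholder in the path override and the mapping of the method's {folder} parameter to it.

# Continuing the earlier sketch: apigw, api_id, and the folder resource as defined above.
apigw.put_method(
    restApiId=api_id,
    resourceId=folder["id"],
    httpMethod="PUT",
    authorizationType="AWS_IAM",
    requestParameters={
        "method.request.path.folder": True,           # required path parameter
        "method.request.header.Content-Type": False,  # mainly for console testing
    },
)

apigw.put_integration(
    restApiId=api_id,
    resourceId=folder["id"],
    httpMethod="PUT",
    type="AWS",
    integrationHttpMethod="PUT",
    uri="arn:aws:apigateway:us-west-2:s3:path/{bucket}",   # {bucket} is the backend path parameter
    credentials="arn:aws:iam::123456789012:role/apig-s3-proxy-role",  # execution role ARN (placeholder)
    requestParameters={
        # Map the method's {folder} path parameter to the backend's {bucket} ...
        "integration.request.path.bucket": "method.request.path.folder",
        # ... and set up the header mappings discussed above (static values are quoted).
        "integration.request.header.Content-Type": "method.request.header.Content-Type",
        "integration.request.header.x-amz-acl": "'authenticated-read'",
        "integration.request.header.Expect": "'100-continue'",
    },
)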

Expose API Methods to Access an Amazon S3 Object in a Bucket

Amazon S3 supports GET, DELETE, HEAD, OPTIONS, POST and PUT actions to access and manage objects in a given bucket. For the complete list of supported actions, see Amazon S3 Operations on Objects.

In this tutorial, we expose the PUT Object, GET Object, HEAD Object, and DELETE Object operations through the API methods of PUT /{folder}/{item}, GET /{folder}/{item}, HEAD /{folder}/{item}, and DELETE /{folder}/{item}, respectively.

The API setups for the PUT, GET, and DELETE methods on /{folder}/{item} are similar to those on /{folder}, as described in Expose API Methods to Access an Amazon S3 Bucket. One major difference is that the additional {item} path parameter is appended to the method request URL, and this path parameter is mapped to the {object} path parameter of the Amazon S3 endpoint URL in the backend.


(Image: Integrate the PUT method request on the /{folder}/{item} resource with Amazon S3)

The same is true for the GET and DELETE methods.
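
A corresponding Boto3 sketch for PUT /{folder}/{item} differs only in the extra path parameter. It reuses the earlier client and resource ids and assumes the PUT method itself is declared as it was for /{folder}.

# Continuing the earlier sketch: apigw, api_id, and the item resource as defined above.
apigw.put_integration(
    restApiId=api_id,
    resourceId=item["id"],
    httpMethod="PUT",
    type="AWS",
    integrationHttpMethod="PUT",
    # Both backend path parameters appear in the path override.
    uri="arn:aws:apigateway:us-west-2:s3:path/{bucket}/{object}",
    credentials="arn:aws:iam::123456789012:role/apig-s3-proxy-role",  # execution role ARN (placeholder)
    requestParameters={
        "integration.request.path.bucket": "method.request.path.folder",
        "integration.request.path.object": "method.request.path.item",
    },
)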

As an illustration, the following screenshot shows the output when testing the GET method on a /{folder}/{item} resource using the API Gateway console. The request correctly returns the plain text ("Welcome to README.txt") as the content of the specified file (README.txt) in the given Amazon S3 bucket (apig-demo).


(Image: Test API Folder/Item GET Bucket Result)

Call the API Using a REST API Client

To provide an end-to-end tutorial, we now show how to call the API using Postman, which supports AWS IAM authorization. A Python sketch that makes the same signed requests programmatically follows this procedure.

To call our Amazon S3 proxy API using Postman

  1. Deploy or redeploy the API. Make a note of the base URL of the API that is displayed next to Invoke URL at the top of the Stage Editor.

  2. Launch Postman.

  3. Choose Authorization and then choose AWS Signature. Type your IAM user's Access Key ID and Secret Access Key into the AccessKey and SecretKey input fields, respectively. Type the AWS region to which your API is deployed in the AWS Region text box. Type execute-api in the Service Name input field.

    
    (Image: Configure Postman to support AWS authorization)

    You can create an access key pair on the Security Credentials tab of your IAM user in the IAM Management Console.

  4. To add a bucket named apig-demo-5 to your Amazon S3 account in the us-west-2 region:

    Note

    The bucket name must be globally unique.

    1. Choose PUT from the drop-down method list and type the method URL (https://api-id.execute-api.aws-region.amazonaws.com/stage/folder-name).

      
      (Image: Set content type header to test API with Postman)
    2. Set the Content-Type header value as application/xml. You may need to delete any existing headers before setting the content type.

      
      (Image: Set content type header to test API with Postman)
    3. Choose the Body menu item and type the following XML fragment as the request body:

      <CreateBucketConfiguration>
        <LocationConstraint>us-west-2</LocationConstraint>
      </CreateBucketConfiguration>

      
      (Image: Set content type header to test API with Postman)
    4. Choose Send to submit the request. If successful, you should receive a 200 OK response with an empty payload.

  5. To add a text file to a bucket, follow the instructions above. If you specify a bucket name of apig-demo-5 for {folder} and a file name of Readme.txt for {item} in the URL and provide a text string of Hello, World! as the request payload, the request becomes

    PUT /S3/apig-demo-5/Readme.txt HTTP/1.1
    Host: 9gn28ca086.execute-api.us-east-1.amazonaws.com
    Content-Type: application/xml
    X-Amz-Date: 20161015T062647Z
    Authorization: AWS4-HMAC-SHA256 Credential=access-key-id/20161015/us-east-1/execute-api/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date, Signature=ccadb877bdb0d395ca38cc47e18a0d76bb5eaf17007d11e40bf6fb63d28c705b
    Cache-Control: no-cache
    Postman-Token: 6135d315-9cc4-8af8-1757-90871d00847e

    Hello, World!

    If everything goes well, you should receive a 200 OK response with an empty payload.

  6. To get the content of the Readme.txt file we just added to the apig-demo-5 bucket, do a GET request like the following one:

    GET /S3/apig-demo-5/Readme.txt HTTP/1.1
    Host: 9gn28ca086.execute-api.us-east-1.amazonaws.com
    Content-Type: application/xml
    X-Amz-Date: 20161015T063759Z
    Authorization: AWS4-HMAC-SHA256 Credential=access-key-id/20161015/us-east-1/execute-api/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=ba09b72b585acf0e578e6ad02555c00e24b420b59025bc7bb8d3f7aed1471339
    Cache-Control: no-cache
    Postman-Token: d60fcb59-d335-52f7-0025-5bd96928098a

    If successful, you should receive a 200 OK response with the Hello, World! text string as the payload.

  7. To list items in the apig-demo-5 bucket, submit the following request:

    GET /S3/apig-demo-5 HTTP/1.1
    Host: 9gn28ca086.execute-api.us-east-1.amazonaws.com
    Content-Type: application/xml
    X-Amz-Date: 20161015T064324Z
    Authorization: AWS4-HMAC-SHA256 Credential=access-key-id/20161015/us-east-1/execute-api/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=4ac9bd4574a14e01568134fd16814534d9951649d3a22b3b0db9f1f5cd4dd0ac
    Cache-Control: no-cache
    Postman-Token: 9c43020a-966f-61e1-81af-4c49ad8d1392

    If successful, you should receive a 200 OK response with an XML payload showing a single item in the specified bucket, unless you added more files to the bucket before submitting this request.

    <?xml version="1.0" encoding="UTF-8"?>
    <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
      <Name>apig-demo-5</Name>
      <Prefix></Prefix>
      <Marker></Marker>
      <MaxKeys>1000</MaxKeys>
      <IsTruncated>false</IsTruncated>
      <Contents>
        <Key>Readme.txt</Key>
        <LastModified>2016-10-15T06:26:48.000Z</LastModified>
        <ETag>"65a8e27d8879283831b664bd8b7f0ad4"</ETag>
        <Size>13</Size>
        <Owner>
          <ID>06e4b09e9d...603addd12ee</ID>
          <DisplayName>user-name</DisplayName>
        </Owner>
        <StorageClass>STANDARD</StorageClass>
      </Contents>
    </ListBucketResult>
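
If you prefer a scripted client over Postman, the following Python sketch signs the same GET request with Signature Version 4 using botocore and sends it with the third-party requests library. The invoke URL, region, and object key are the example values used above.

import boto3
import requests                              # third-party HTTP client
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-east-1"                         # the region the API is deployed to (example)
url = "https://9gn28ca086.execute-api.us-east-1.amazonaws.com/S3/apig-demo-5/Readme.txt"

credentials = boto3.Session().get_credentials()

# Build and sign the request for the execute-api service, then send it.
aws_request = AWSRequest(method="GET", url=url, headers={"Content-Type": "application/xml"})
SigV4Auth(credentials, "execute-api", region).add_auth(aws_request)

response = requests.get(url, headers=dict(aws_request.headers))
print(response.status_code, response.text)   # expect 200 and "Hello, World!"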

Note

To upload or download an image, you need to set content handling to CONVERT_TO_BINARY.
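
For example, the integration response for GET /{folder}/{item} might be configured roughly as follows; this is a sketch reusing the earlier client and ids, and serving binary media also depends on the API's binary media type settings.

# Continuing the earlier sketch: apigw, api_id, and the item resource as defined above.
# Pass binary objects, such as images, through to the client unmodified.
apigw.put_integration_response(
    restApiId=api_id,
    resourceId=item["id"],
    httpMethod="GET",
    statusCode="200",
    contentHandling="CONVERT_TO_BINARY",
)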