

# Amazon CloudWatch logging for AWS Transfer Family servers
<a name="structured-logging"></a>

Amazon CloudWatch is a powerful monitoring and observability service that provides comprehensive visibility into your AWS resources, including AWS Transfer Family.
+ Real-time monitoring: CloudWatch monitors Transfer Family resources and applications in real-time, allowing you to track and analyze their performance.
+ Metrics collection: CloudWatch collects and tracks various metrics for your resources and applications, which are variables you can measure and use for analysis.
+ CloudWatch home page: The CloudWatch home page automatically displays metrics about Transfer Family and other AWS services you use, providing a centralized view of your monitoring data.
+ Custom dashboards: You can create custom dashboards in CloudWatch to display metrics specific to your custom applications and the resources you choose to monitor.
+ Alarms and notifications: CloudWatch allows you to create alarms that monitor your metrics and trigger notifications or automated actions when certain thresholds are breached. This can be useful for monitoring file transfer activity in your Transfer Family servers and scaling resources accordingly.
+ Cost optimization: You can use the data collected by CloudWatch to identify under-utilized resources and take actions, such as stopping or deleting instances, to optimize your costs.

Overall, the comprehensive monitoring capabilities in CloudWatch make it a valuable tool for managing and optimizing your Transfer Family infrastructure and the applications running on it.

For logging details for Transfer Family web apps, see [CloudTrail logging for Transfer Family web apps](webapp-cloudtrail.md).

## Types of CloudWatch logging for Transfer Family
<a name="log-tf-types"></a>

Transfer Family provides two ways to log events to CloudWatch:
+ JSON structured logging
+ Logging via a logging role

For Transfer Family servers, you can choose the logging mechanism that you prefer. For connectors and workflows, only logging roles are supported.

**JSON structured logging**

For logging server events, we recommend using JSON structured logging. This provides a more comprehensive logging format that enables CloudWatch log querying. For this type of logging, the IAM policy for the user that creates the server (or edits the server's logging configuration) must contain the following permissions:
+ `logs:CreateLogDelivery`
+ `logs:DeleteLogDelivery`
+ `logs:DescribeLogGroups`
+ `logs:DescribeResourcePolicies`
+ `logs:GetLogDelivery`
+ `logs:ListLogDeliveries`
+ `logs:PutResourcePolicy`
+ `logs:UpdateLogDelivery`

The following is an example policy.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogDelivery",
                "logs:GetLogDelivery",
                "logs:UpdateLogDelivery",
                "logs:DeleteLogDelivery",
                "logs:ListLogDeliveries",
                "logs:PutResourcePolicy",
                "logs:DescribeResourcePolicies",
                "logs:DescribeLogGroups"                
            ],
            "Resource": "*"
        }
    ]
}
```

For details on setting up JSON structured logging, see [Creating, updating, and viewing logging for servers](log-server-manage.md).
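If you script this configuration, the structured-logging destination is set through the `StructuredLogDestinations` field of the `UpdateServer` API. The following is a minimal sketch using boto3; the server ID, Region, account ID, and log group name are placeholders, and the client call is commented out because it requires the `logs:*` permissions listed above.

```python
def structured_logging_params(server_id, region, account_id, log_group):
    """Build UpdateServer parameters that enable JSON structured logging.
    The destination is a CloudWatch log group ARN that ends with ':*'."""
    arn = f"arn:aws:logs:{region}:{account_id}:log-group:{log_group}:*"
    return {"ServerId": server_id, "StructuredLogDestinations": [arn]}

params = structured_logging_params(
    "s-1234567890abcdef0", "us-east-1", "111122223333",
    "/aws/transfer/s-1234567890abcdef0")
# import boto3
# boto3.client("transfer").update_server(**params)
```

Passing an empty list for `StructuredLogDestinations` turns structured logging off again.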

**Logging role**

To log events for a managed workflow that is attached to a server, as well as for connectors, you need to specify a logging role. To set access, you create a resource-based IAM policy and an IAM role that provides that access information. The following is an example policy for an AWS account that can log server events.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*"
        }
    ]
}
```

For details on configuring a logging role to log workflow events, see [Managing logging for workflows](cloudwatch-workflows.md).
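Programmatically, a logging role is attached with the `LoggingRole` field of `CreateServer` or `UpdateServer`. A minimal sketch follows; the role name is a placeholder, and the boto3 call is commented out.

```python
def logging_role_params(server_id, account_id, role_name):
    """Build UpdateServer parameters that attach a CloudWatch logging role.
    Events then flow to the fixed log group /aws/transfer/<server_id>."""
    return {
        "ServerId": server_id,
        "LoggingRole": f"arn:aws:iam::{account_id}:role/{role_name}",
    }

params = logging_role_params(
    "s-1234567890abcdef0", "111122223333", "transfer-logging-role")
# import boto3
# boto3.client("transfer").update_server(**params)
```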

# Creating, updating, and viewing logging for servers
<a name="log-server-manage"></a>

For all AWS Transfer Family servers, we provide structured logging. We recommend that you use structured logging for all new and existing Transfer Family servers. Benefits of using structured logging include the following:
+ Receive logs in a structured JSON format.
+ Query your logs with Amazon CloudWatch Logs Insights, which automatically discovers JSON formatted fields.
+ Share log groups across AWS Transfer Family resources. This allows you to combine log streams from multiple servers into a single log group, making it easier to manage your monitoring configurations and log retention settings.
+ Create aggregated metrics and visualizations that can be added to CloudWatch dashboards.
+ Track usage and performance data by using log groups to create consolidated log metrics, visualizations, and dashboards.
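As an example of the Logs Insights benefit above, the following sketch builds a query over the auto-discovered JSON fields (`activity-type`, `user`, `source-ip`). The `start_query` call is commented out, and the log group name is a placeholder.

```python
def failed_auth_query():
    """Logs Insights query string for recent authentication failures
    in JSON structured Transfer Family logs. Field names that contain
    hyphens are wrapped in backticks."""
    return (
        "fields @timestamp, user, `source-ip`, message "
        "| filter `activity-type` = 'AUTH_FAILURE' "
        "| sort @timestamp desc"
    )

query = failed_auth_query()
# import boto3, time
# logs = boto3.client("logs")
# logs.start_query(logGroupName="/aws/transfer/s-1234567890abcdef0",
#                  startTime=int(time.time()) - 3600,
#                  endTime=int(time.time()), queryString=query)
```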

To enable logging for workflows that are attached to servers, you must use a logging role.

**Note**  
When you add a logging role, the logging group is always `/aws/transfer/your-serverID`, and can't be changed. This means that unless you send your structured server logs to the same group, you are logging to two separate log groups.  
If you know that you are going to associate a workflow with your server, and thus need to add a logging role, you can set up structured logging to log to the default log group of `/aws/transfer/your-serverID`.  
To modify your logging group, see [StructuredLogDestinations](https://docs.aws.amazon.com/transfer/latest/APIReference/API_UpdateServer.html#TransferFamily-UpdateServer-request-StructuredLogDestinations) in the *AWS Transfer Family API Reference*.

If you create a new server by using the Transfer Family console, logging is enabled by default. After you create the server, you can use the `UpdateServer` API operation to change your logging configuration. For details, see [StructuredLogDestinations](https://docs.aws.amazon.com/transfer/latest/APIReference/API_UpdateServer.html#TransferFamily-UpdateServer-request-StructuredLogDestinations).

Currently, for workflows, if you want logging enabled, you must specify a logging role:
+ If you associate a workflow with a server, using either the `CreateServer` or `UpdateServer` API operation, the system does not automatically create a logging role. If you want to log your workflow events, you need to explicitly attach a logging role to the server.
+ If you create a server using the Transfer Family console and you attach a workflow, logs are sent to a log group that contains the server ID in the name. The format is `/aws/transfer/server-id`, for example, `/aws/transfer/s-1111aaaa2222bbbb3`. The server logs can be sent to this same log group or a different one.

**Logging considerations for creating and editing servers in the console**
+ New servers created through the console only support structured JSON logging, unless a workflow is attached to the server.
+ *No logging* is not an option for new servers that you create in the console.
+ Existing servers can enable structured JSON logging through the console at any time.
+ Enabling structured JSON logging through the console disables the existing logging method, so that customers aren't charged twice. The exception is if a workflow is attached to the server.
+ If you enable structured JSON logging, you cannot later disable it through the console.
+ If you enable structured JSON logging, you can change the log group destination through the console at any time.
+ If you enable structured JSON logging, you cannot edit the logging role through the console if you have enabled both logging types through the API. The exception is if your server has a workflow attached. However, the logging role does continue to appear in **Additional details**.

**Logging considerations for creating and editing servers using the API or SDK**
+ If you create a new server through the API, you can configure either or both types of logging, or choose no logging.
+ For existing servers, enable and disable structured JSON logging at any time.
+ You can change the log group through the API at any time.
+ You can change the logging role through the API at any time.

**To enable structured logging, you must be logged into an account with the following permissions**
+ `logs:CreateLogDelivery`
+ `logs:DeleteLogDelivery`
+ `logs:DescribeLogGroups`
+ `logs:DescribeResourcePolicies`
+ `logs:GetLogDelivery`
+ `logs:ListLogDeliveries`
+ `logs:PutResourcePolicy`
+ `logs:UpdateLogDelivery`

An example policy is available in the section [Configure CloudWatch logging role](configure-cw-logging-role.md).

**Topics**
+ [Creating logging for servers](#log-server-create)
+ [Updating logging for a server](#log-server-update)
+ [Viewing the server configuration](#log-server-config)

## Creating logging for servers
<a name="log-server-create"></a>

When you create a new server, on the **Configure additional details** page, you can specify an existing log group, or create a new one.

![\[Logging pane for Configure additional details in the Create server wizard. Choose an existing log group is selected.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-choose-existing-group.png)


If you choose **Create log group**, the CloudWatch console ([https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/)) opens to the **Create log group** page. For details, see [Create a log group in CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#Create-Log-Group).

## Updating logging for a server
<a name="log-server-update"></a>

The details for logging depend on the scenario for your update.

**Note**  
When you opt in to structured JSON logging, in rare cases there can be a delay between when Transfer Family stops logging in the old format and when it starts logging in the new JSON format. This can result in events that aren't logged. There is no service disruption, but be cautious about transferring files during the first hour after changing your logging method, because logs could be dropped.

If you are editing an existing server, your options depend on the state of the server.
+ The server already has a logging role enabled, but does not have Structured JSON logging enabled.  
![\[Logging pane, showing an existing logging role.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-choose-role.png)
+ The server does not have any logging enabled.  
![\[Logging pane if the server does not have any logging enabled.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-edit-none.png)
+ The server already has Structured JSON logging enabled, but does not have a logging role specified.  
![\[Logging pane if the server has structured JSON logging enabled but no logging role specified.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-edit-add-json-02.png)
+ The server already has Structured JSON logging enabled, and also has a logging role specified.  
![\[Logging pane if the server has structured logging enabled and also has a logging role specified.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-edit-both.png)

## Viewing the server configuration
<a name="log-server-config"></a>

Depending on your scenario, the server configuration page might look like one of the following examples:
+ No logging is enabled.  
![\[Logging configuration with no logging configured.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-config-none.png)
+ Structured JSON logging is enabled.  
![\[Logging configuration with structured logging configured.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-config-structured.png)
+ Logging role is enabled, but structured JSON logging is not enabled.  
![\[Logging configuration with a logging role configured.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-config-legacy.png)
+ Both types of logging (logging role and structured JSON logging) are enabled.  
![\[Logging configuration with both types (logging role and structured JSON logging) of logging configured.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/logging-server-config-both.png)

# Managing logging for workflows
<a name="cloudwatch-workflows"></a>

CloudWatch provides consolidated auditing and logging for workflow progress and results. Additionally, AWS Transfer Family provides several metrics for workflows. You can view metrics for how many workflow executions started, completed successfully, and failed in the previous minute. All of the CloudWatch metrics for Transfer Family are described in [Using CloudWatch metrics for Transfer Family servers](metrics.md).

**View Amazon CloudWatch logs for workflows**

1. Open the Amazon CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the left navigation pane, choose **Logs**, then choose **Log groups**.

1. On the **Log groups** page, on the navigation bar, choose the correct Region for your AWS Transfer Family server.

1. Choose the log group that corresponds to your server.

   For example, if your server ID is `s-1234567890abcdef0`, your log group is `/aws/transfer/s-1234567890abcdef0`.

1. On the log group details page for your server, the most recent log streams are displayed. There are two log streams for the user that you are exploring: 
   + One for each Secure Shell (SSH) File Transfer Protocol (SFTP) session.
   + One for the workflow that is being executed for your server. The format for the log stream for the workflow is `username.workflowID.uniqueStreamSuffix`.

   For example, if your user is `mary-major`, you have the following log streams:

   ```
   mary-major-usa-east.1234567890abcdef0
   mary.w-abcdef01234567890.021345abcdef6789
   ```
**Note**  
 The 16-digit alphanumeric identifiers listed in this example are fictitious. The values that you see in Amazon CloudWatch are different. 

The **Log events** page for `mary-major-usa-east.1234567890abcdef0` displays the details for each user session, and the `mary.w-abcdef01234567890.021345abcdef6789` log stream contains the details for the workflow. 
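When you process streams programmatically, you can distinguish session streams from workflow streams by the number of dot-separated parts in the name, as described above. The following sketch assumes usernames without embedded periods.

```python
def classify_stream(stream_name):
    """Classify a Transfer Family log stream name: workflow streams use
    the format username.workflowId.uniqueStreamSuffix; other streams
    are treated as per-session streams."""
    parts = stream_name.split(".")
    if len(parts) == 3 and parts[1].startswith("w-"):
        return {"kind": "workflow", "username": parts[0],
                "workflowId": parts[1], "suffix": parts[2]}
    return {"kind": "session", "username": parts[0]}

info = classify_stream("mary.w-abcdef01234567890.021345abcdef6789")
# info["kind"] == "workflow"; info["workflowId"] == "w-abcdef01234567890"
```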

 The following is a sample log stream for `mary.w-abcdef01234567890.021345abcdef6789`, based on a workflow (`w-abcdef01234567890`) that contains a copy step. 

```
{
    "type": "ExecutionStarted",
    "details": {
        "input": {
            "initialFileLocation": {
                "bucket": "amzn-s3-demo-bucket",
                "key": "mary/workflowSteps2.json",
                "versionId": "version-id",
                "etag": "etag-id"
            }
        }
    },
    "workflowId":"w-abcdef01234567890",
    "executionId":"execution-id",
    "transferDetails": {
        "serverId":"s-server-id",
        "username":"mary",
        "sessionId":"session-id"
    }
},
{
    "type":"StepStarted",
    "details": {
        "input": {
            "fileLocation": {
                "backingStore":"S3",
                "bucket":"amzn-s3-demo-bucket",
                "key":"mary/workflowSteps2.json",
                "versionId":"version-id",
                "etag":"etag-id"
            }
        },
        "stepType":"COPY",
        "stepName":"copyToShared"
    },
    "workflowId":"w-abcdef01234567890",
    "executionId":"execution-id",
    "transferDetails": {
        "serverId":"s-server-id",
        "username":"mary",
        "sessionId":"session-id"
    }
},
{
    "type":"StepCompleted",
    "details":{
        "output":{},
        "stepType":"COPY",
        "stepName":"copyToShared"
    },
    "workflowId":"w-abcdef01234567890",
    "executionId":"execution-id",
    "transferDetails":{
        "serverId":"server-id",
        "username":"mary",
        "sessionId":"session-id"
    }
},
{
    "type":"ExecutionCompleted",
    "details": {},
    "workflowId":"w-abcdef01234567890",
    "executionId":"execution-id",
    "transferDetails":{
        "serverId":"s-server-id",
        "username":"mary",
        "sessionId":"session-id"
    }
}
```
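Events like the sample above can be post-processed into a quick per-step summary of a workflow run. A minimal sketch over already-parsed event dicts (the field names follow the sample log stream):

```python
def summarize_workflow_run(events):
    """Summarize a workflow log stream: overall execution status plus
    the latest status seen for each named step."""
    summary = {"execution": None, "steps": {}}
    for event in events:
        etype = event["type"]
        if etype.startswith("Execution"):
            summary["execution"] = etype
        elif etype.startswith("Step"):
            name = event.get("details", {}).get("stepName", "(unnamed)")
            summary["steps"][name] = etype
    return summary

sample = [
    {"type": "ExecutionStarted", "details": {}},
    {"type": "StepStarted", "details": {"stepType": "COPY", "stepName": "copyToShared"}},
    {"type": "StepCompleted", "details": {"stepType": "COPY", "stepName": "copyToShared"}},
    {"type": "ExecutionCompleted", "details": {}},
]
result = summarize_workflow_run(sample)
# result == {"execution": "ExecutionCompleted", "steps": {"copyToShared": "StepCompleted"}}
```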

# Configure CloudWatch logging role
<a name="configure-cw-logging-role"></a>

To set access, you create a resource-based IAM policy and an IAM role that provides that access information.

To enable Amazon CloudWatch logging, you start by creating an IAM policy that enables CloudWatch logging. You then create an IAM role and attach the policy to it. You can do this when you are [creating a server](getting-started.md#getting-started-server) or by [editing an existing server](edit-server-config.md). For more information about CloudWatch, see [What is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) and [What is Amazon CloudWatch Logs?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) in the *Amazon CloudWatch User Guide*.

Use the following example IAM policies to allow CloudWatch logging.

------
#### [ Use a logging role ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*"
        }
    ]
}
```

------
#### [ Use structured logging ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogDelivery",
                "logs:GetLogDelivery",
                "logs:UpdateLogDelivery",
                "logs:DeleteLogDelivery",
                "logs:ListLogDeliveries",
                "logs:PutResourcePolicy",
                "logs:DescribeResourcePolicies",
                "logs:DescribeLogGroups"                
            ],
            "Resource": "*"
        }
    ]
}
```

In the preceding **Use a logging role** policy, for the **Resource**, replace the *region-id* and *AWS account* with your values. For example, **"Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/transfer/\*"**

------

You then create a role and attach the CloudWatch Logs policy that you created.

**To create an IAM role and attach a policy**

1. In the navigation pane, choose **Roles**, and then choose **Create role**.

   On the **Create role** page, make sure that **AWS service** is chosen.

1. Choose **Transfer** from the service list, and then choose **Next: Permissions**. This establishes a trust relationship between AWS Transfer Family and the IAM role. Additionally, add `aws:SourceAccount` and `aws:SourceArn` condition keys to protect yourself against the *confused deputy* problem. See the following documentation for more details:
   + Procedure for establishing a trust relationship with AWS Transfer Family: [To establish a trust relationship](requirements-roles.md#establish-trust-transfer) 
   + Description for confused deputy problem: [the confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html)

1. In the **Attach permissions policies** section, locate and choose the CloudWatch Logs policy that you just created, and choose **Next: Tags**.

1. (Optional) Enter a key and value for a tag, and choose **Next: Review**.

1. On the **Review** page, enter a name and description for your new role, and then choose **Create role**.

1. To view the logs, choose the **Server ID** to open the server configuration page, and choose **View logs**. You are redirected to the CloudWatch console where you can see your log streams.

On the CloudWatch page for your server, you can see records of user authentication (success and failure), data uploads (`PUT` operations), and data downloads (`GET` operations).

# Viewing Transfer Family log streams
<a name="view-log-entries"></a>

**To view your Transfer Family server logs**

1. Navigate to the details page for a server.

1. Choose **View logs**. This opens Amazon CloudWatch.

1. The log group for your selected server is displayed.  
![\[\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/log-example-01.png)

1. You can select a log stream to display details and individual entries for the stream.
   + If there is a listing for **ERRORS**, you can choose it to view details for the latest errors for the server.  
![\[\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/log-example-errors.png)
   + Choose any other entry to see an example log stream.  
![\[\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/log-example-02.png)
   + If your server has a managed workflow associated with it, you can view logs for the workflow runs.
**Note**  
The format for the log stream for the workflow is `username.workflowId.uniqueStreamSuffix`. For example, **decrypt-user.w-a1111222233334444.aaaa1111bbbb2222** could be the name of a log stream for user **decrypt-user** and workflow **w-a1111222233334444**.   
![\[\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/log-example-workflow.png)

**Note**  
For any expanded log entry, you can copy the entry to the clipboard by choosing **Copy**. For more details about CloudWatch logs, see [Viewing log data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#ViewingLogData).

## Creating Amazon CloudWatch alarms
<a name="monitoring-cloudwatch-examples"></a>

The following example shows how to create Amazon CloudWatch alarms using the AWS Transfer Family metric, `FilesIn`.

------
#### [ CDK ]

```
new cloudwatch.Metric({
  namespace: "AWS/Transfer",
  metricName: "FilesIn",
  dimensionsMap: { ServerId: "s-00000000000000000" },
  statistic: "Average",
  period: cdk.Duration.minutes(1),
}).createAlarm(this, "AWS/Transfer FilesIn", {
  threshold: 1000,
  evaluationPeriods: 10,
  datapointsToAlarm: 5,
  comparisonOperator: cloudwatch.ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD,
});
```

------
#### [ CloudFormation ]

```
Type: AWS::CloudWatch::Alarm
Properties:
  Namespace: AWS/Transfer
  MetricName: FilesIn
  Dimensions:
    - Name: ServerId
      Value: s-00000000000000000
  Statistic: Average
  Period: 60
  Threshold: 1000
  EvaluationPeriods: 10
  DatapointsToAlarm: 5
  ComparisonOperator: GreaterThanOrEqualToThreshold
```

------
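The same alarm can also be created from a script. The following sketch mirrors the CDK and CloudFormation examples with `PutMetricAlarm` parameters; the alarm name is an assumption, and the boto3 call is commented out.

```python
def files_in_alarm(server_id, alarm_name="transfer-files-in-high"):
    """PutMetricAlarm parameters for the FilesIn metric: alarm when the
    1-minute average is >= 1000 for 5 of 10 evaluation periods."""
    return {
        "AlarmName": alarm_name,  # hypothetical name, not from the examples above
        "Namespace": "AWS/Transfer",
        "MetricName": "FilesIn",
        "Dimensions": [{"Name": "ServerId", "Value": server_id}],
        "Statistic": "Average",
        "Period": 60,
        "Threshold": 1000,
        "EvaluationPeriods": 10,
        "DatapointsToAlarm": 5,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    }

alarm = files_in_alarm("s-00000000000000000")
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```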

## Logging Amazon S3 API operations to S3 access logs
<a name="monitoring-s3-access-logs"></a>

**Note**  
This section does not apply to Transfer Family web apps.

If you are [using Amazon S3 access logs to identify S3 requests](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-s3-access-logs-to-identify-requests.html) made on behalf of your file transfer users, `RoleSessionName` is used to display which IAM role was assumed to service the file transfers. It also displays additional information such as the user name, session id, and server-id used for the transfers. The format is `[AWS:Role Unique Identifier]/username.sessionid@server-id` and is contained in the Requester field. For example, the following are the contents for a sample Requester field from an S3 access log for a file that was copied to the S3 bucket.

`arn:aws:sts::AWS-Account-ID:assumed-role/IamRoleName/username.sessionid@server-id`

The Requester field above shows the IAM role named `IamRoleName`. For more information about IAM role unique identifiers, see [Unique identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids) in the *AWS Identity and Access Management User Guide*.
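Given that format, the Requester value can be split into its components when you post-process access logs. A sketch, with the field layout taken from the example above:

```python
def parse_requester(requester):
    """Split an S3 access log Requester value of the form
    arn:aws:sts::account-id:assumed-role/RoleName/username.sessionid@server-id."""
    role_part, _, session_name = requester.rpartition("/")
    user_and_session, _, server_id = session_name.partition("@")
    username, _, session_id = user_and_session.rpartition(".")
    return {
        "role_name": role_part.rsplit("/", 1)[-1],
        "username": username,
        "session_id": session_id,
        "server_id": server_id,
    }

fields = parse_requester(
    "arn:aws:sts::111122223333:assumed-role/IamRoleName/mary.9ca9a0e1cec6ad9d@s-1234567890abcdef0")
# fields["role_name"] == "IamRoleName"; fields["server_id"] == "s-1234567890abcdef0"
```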

# Examples to limit confused deputy problem
<a name="cloudwatch-confused-deputy"></a>

The confused deputy problem is a security issue where an entity that doesn't have permission to perform an action can coerce a more-privileged entity to perform the action. In AWS, cross-service impersonation can result in the confused deputy problem. For more details, see [Cross-service confused deputy prevention](confused-deputy.md).

**Note**  
In the following examples, replace each *user input placeholder* with your own information.  
In these examples, you can remove the ARN details for a workflow if your server doesn't have any workflows attached to it.

The following example logging/invocation policy allows any server (and workflow) in the account to assume the role.

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAllServersWithWorkflowAttached",
            "Effect": "Allow",
            "Principal": {
                "Service": "transfer.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "111122223333"
                },
                "ArnLike": {
                   "aws:SourceArn": [
                     "arn:aws:transfer:us-west-2:111122223333:server/*",
                     "arn:aws:transfer:us-west-2:111122223333:workflow/*"
                   ]
                }
            }
        }
    ]
}
```

The following example logging/invocation policy allows a specific server (and workflow) to assume the role.

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSpecificServerWithWorkflowAttached",
            "Effect": "Allow",
            "Principal": {
                "Service": "transfer.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "111122223333"
                },
                "ArnEquals": {
                   "aws:SourceArn": [
                       "arn:aws:transfer:us-west-2:111122223333:server/server-id",
                       "arn:aws:transfer:us-west-2:111122223333:workflow/workflow-id"
                   ]
                }
            }
        }
    ]
}
```
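If you generate these trust policies in code, keeping the condition keys in one helper makes it harder to omit them. The following sketch emits the "specific server" variant above; the Region is hardcoded to match the example, and the workflow ARN is included only when a workflow ID is supplied.

```python
import json

def transfer_trust_policy(account_id, server_id, workflow_id=None):
    """Build a trust policy for transfer.amazonaws.com with
    aws:SourceAccount / aws:SourceArn confused-deputy conditions."""
    arns = [f"arn:aws:transfer:us-west-2:{account_id}:server/{server_id}"]
    if workflow_id:
        arns.append(f"arn:aws:transfer:us-west-2:{account_id}:workflow/{workflow_id}")
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowSpecificServerWithWorkflowAttached",
            "Effect": "Allow",
            "Principal": {"Service": "transfer.amazonaws.com"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:SourceAccount": account_id},
                "ArnEquals": {"aws:SourceArn": arns},
            },
        }],
    }

policy = transfer_trust_policy("111122223333", "s-1234567890abcdef0")
# print(json.dumps(policy, indent=4))
```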

# CloudWatch log structure for Transfer Family
<a name="cw-structure-logs"></a>

This topic describes the fields that are populated in Transfer Family logs: both for JSON structured log entries and legacy log entries.

**Topics**
+ [JSON structured logs for Transfer Family](#json-log-entries)
+ [Legacy logs for Transfer Family](#legacy-log-entries)

## JSON structured logs for Transfer Family
<a name="json-log-entries"></a>

The following table contains details for log entry fields for Transfer Family SFTP/FTP/FTPS actions, in the new JSON structured log format.


| Field | Description | Example entry | 
| --- |--- |--- |
| activity-type | The action by the user | The available activity types are as follows: `AUTH_FAILURE`, `CONNECTED`, `DISCONNECTED`, `ERROR`, `EXIT_REASON`, `CLOSE`, `CREATE_SYMLINK`, `DELETE`, `MKDIR`, `OPEN`, `PARTIAL_CLOSE`, `RENAME`, `RMDIR`, `SETSTAT`, `TLS_RESUME_FAILURE`. | 
| bytes-in | Number of bytes uploaded by the user | 29238420042 | 
| bytes-out | Number of bytes downloaded by the user | 23094032490328 | 
| ciphers | Specifies the SSH cipher negotiated for the connection (available ciphers are listed in [Cryptographic algorithms](security-policies.md#cryptographic-algorithms)) | aes256-gcm@openssh.com | 
| client | The user's client software | SSH-2.0-OpenSSH\_7.4 | 
| home-dir | The directory that the end user lands on when they connect to the endpoint if their home directory type is PATH: if they have a logical home directory, this value is always / | /user-home-bucket/test | 
| kex | Specifies the negotiated SSH key exchange (KEX) for the connection (available KEX are listed in [Cryptographic algorithms](security-policies.md#cryptographic-algorithms)) | diffie-hellman-group14-sha256 | 
| message | Provides more information related to the error | <string> | 
| method | The authentication method | publickey | 
| mode | Specifies how a client opens a file | CREATE \| TRUNCATE \| WRITE | 
| operation | The client operation on a file | OPEN \| CLOSE | 
| path | Actual file path affected | /amzn-s3-demo-bucket/test-file-1.pdf  | 
| ssh-public-key | The public key body for the user that is connecting | AAAAC3NzaC1lZDI1NTE5AAAAIA9OY0qV6XYVHaaOiWAcj2spDJVbgjrqDPY4pxd6GnHl | 
| ssh-public-key-fingerprint | The public key fingerprint, as shown in the console for service-managed users when listing their user keys.  In the console, the fingerprint is displayed with the padding characters (if any): from 0 to 3 equal signs (=) at the end. In the log entry, this padding is stripped from the output.  | SHA256:BY3gNMHwTfjd4n2VuT4pTyLOk82zWZj4KEYEu7y4r/0 | 
| ssh-public-key-type | Type of public key: Transfer Family supports RSA-, ECDSA-, and ED25519-formatted keys | ssh-ed25519 | 
| resource-arn | A system-assigned, unique identifier for a specific resource (for example, a server) |  arn:aws:transfer:ap-northeast-1:12346789012:server/s-1234567890akeu2js2  | 
| role | The IAM role of the user |  arn:aws:iam::0293883675:role/testuser-role  | 
| session-id | A system-assigned, unique identifier for a single session |  9ca9a0e1cec6ad9d  | 
| source-ip | Client IP address | 192.0.2.129 | 
| user | The end user's username | myname192 | 
| user-policy | The permissions specified for the end user: this field is populated if the user's policy is a session policy. | The JSON code for the session policy that is being used | 

## Legacy logs for Transfer Family
<a name="legacy-log-entries"></a>

The following table contains details for log entries for various Transfer Family actions.

**Note**  
 These entries are not in the new JSON structured log format.

| Action | Corresponding logs within Amazon CloudWatch Logs | 
| --- | --- | 
| Authentication failures |  ERRORS AUTH_FAILURE Method=publickey User=lhr Message="RSA SHA256:Lfz3R2nmLY4raK/b7Rb1rSvUIbAE/a/Hxg0c7l1JIZ0" SourceIP=3.8.172.211  | 
| COPY/TAG/DELETE/DECRYPT workflow |  {"type":"StepStarted","details":{"input":{"fileLocation":{"backingStore":"EFS","filesystemId":"fs-12345678","path":"/lhr/regex.py"}},"stepType":"TAG","stepName":"successful_tag_step"},"workflowId":"w-1111aaaa2222bbbb3","executionId":"81234abcd-1234-efgh-5678-ijklmnopqr90","transferDetails":{"serverId":"s-1234abcd5678efghi","username":"lhr","sessionId":"1234567890abcdef0"}}  | 
| Custom step workflow |  {"type":"CustomStepInvoked","details":{"output":{"token":"MzM4Mjg5YWUtYTEzMy00YjIzLWI3OGMtYzU4OGI2ZjQyMzE5"},"stepType":"CUSTOM","stepName":"efs-s3_copy_2"},"workflowId":"w-9283e49d33297c3f7","executionId":"1234abcd-1234-efgh-5678-ijklmnopqr90","transferDetails":{"serverId":"s-zzzz1111aaaa22223","username":"lhr","sessionId":"1234567890abcdef0"}}  | 
| Deletes |  lhr.33a8fb495ffb383b DELETE Path=/bucket/user/123.jpg  | 
| Downloads |  lhr.33a8fb495ffb383b OPEN Path=/bucket/user/123.jpg Mode=READ lhr.33a8fb495ffb383b CLOSE Path=/bucket/user/123.jpg BytesOut=3618546  | 
| Logins/Logouts |  user.914984e553bcddb6 CONNECTED SourceIP=1.22.111.222 User=lhr HomeDir=LOGICAL Client=SSH-2.0-OpenSSH_7.4 Role=arn:aws:iam::123456789012:role/sftp-s3-access user.914984e553bcddb6 DISCONNECTED  | 
| Renames |  lhr.33a8fb495ffb383b RENAME Path=/bucket/user/lambo.png NewPath=/bucket/user/ferrari.png  | 
| Sample workflow error log |  {"type":"StepErrored","details":{"errorType":"BAD_REQUEST","errorMessage":"Cannot tag Efs file","stepType":"TAG","stepName":"successful_tag_step"},"workflowId":"w-1234abcd5678efghi","executionId":"81234abcd-1234-efgh-5678-ijklmnopqr90","transferDetails":{"serverId":"s-1234abcd5678efghi","username":"lhr","sessionId":"1234567890abcdef0"}}  | 
| Symlinks |  lhr.eb49cf7b8651e6d5 CREATE_SYMLINK LinkPath=/fs-12345678/lhr/pqr.jpg TargetPath=abc.jpg  | 
| Uploads |  lhr.33a8fb495ffb383b OPEN Path=/bucket/user/123.jpg Mode=CREATE\|TRUNCATE\|WRITE lhr.33a8fb495ffb383b CLOSE Path=/bucket/user/123.jpg BytesIn=3618546  | 
| Workflows |  {"type":"ExecutionStarted","details":{"input":{"initialFileLocation":{"backingStore":"EFS","filesystemId":"fs-12345678","path":"/lhr/regex.py"}}},"workflowId":"w-1111aaaa2222bbbb3","executionId":"1234abcd-1234-efgh-5678-ijklmnopqr90","transferDetails":{"serverId":"s-zzzz1111aaaa22223","username":"lhr","sessionId":"1234567890abcdef0"}} {"type":"StepStarted","details":{"input":{"fileLocation":{"backingStore":"EFS","filesystemId":"fs-12345678","path":"/lhr/regex.py"}},"stepType":"CUSTOM","stepName":"efs-s3_copy_2"},"workflowId":"w-9283e49d33297c3f7","executionId":"1234abcd-1234-efgh-5678-ijklmnopqr90","transferDetails":{"serverId":"s-18ca49dce5d842e0b","username":"lhr","sessionId":"1234567890abcdef0"}}  | 
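
The legacy entries for file operations use a space-delimited `Key=Value` format after a `user.session` prefix and an action keyword. The following is a minimal parsing sketch; the helper name is illustrative, and it assumes entries without quoted values containing spaces (such as the authentication-failure `Message` field).

```python
def parse_legacy_entry(line):
    """Parse a legacy Transfer Family log line of the form
    '<user>.<session-id> ACTION Key=Value ...' into a dict."""
    tokens = line.split()
    prefix, action = tokens[0], tokens[1]
    user, _, session = prefix.partition(".")
    fields = {"user": user, "session": session, "action": action}
    for token in tokens[2:]:
        # Each remaining token is a Key=Value pair.
        key, _, value = token.partition("=")
        fields[key] = value
    return fields

entry = parse_legacy_entry(
    "lhr.33a8fb495ffb383b OPEN Path=/bucket/user/123.jpg Mode=READ"
)
print(entry["action"], entry["Path"], entry["Mode"])
```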

# Example CloudWatch log entries
<a name="cw-example-logs"></a>

This topic presents example log entries.

**Topics**
+ [Example transfer sessions log entries](#session-log-examples)
+ [Example log entries for SFTP connectors](#example-sftp-connector-logs)
+ [Example log entries for VPC Lattice connectors](#example-vpc-lattice-connector-logs)
+ [Example log entries for key exchange algorithm failures](#example-kex-logs)

## Example transfer sessions log entries
<a name="session-log-examples"></a>

In this example, an SFTP user connects to a Transfer Family server, uploads a file, then disconnects from the session.

The following log entry reflects an SFTP user connecting to a Transfer Family server.

```
{
   "role": "arn:aws:iam::500655546075:role/transfer-s3",
   "activity-type": "CONNECTED",
   "ciphers": "chacha20-poly1305@openssh.com,chacha20-poly1305@openssh.com",
   "client": "SSH-2.0-OpenSSH_7.4",
   "source-ip": "52.94.133.133",
   "resource-arn": "arn:aws:transfer:us-east-1:500655546075:server/s-3fe215d89f074ed2a",
   "home-dir": "/test/log-me",
   "ssh-public-key": "AAAAC3NzaC1lZDI1NTE5AAAAIA9OY0qV6XYVHaaOiWAcj2spDJVbgjrqDPY4pxd6GnHl",
   "ssh-public-key-fingerprint": "SHA256:BY3gNMHwTfjd4n2VuT4pTyLOk82zWZj4KEYEu7y4r/0",
   "ssh-public-key-type": "ssh-ed25519",
   "user": "log-me",
   "kex": "ecdh-sha2-nistp256",
   "session-id": "9ca9a0e1cec6ad9d"
}
```

The following log entry reflects the SFTP user uploading a file into their Amazon S3 bucket.

```
{
   "mode": "CREATE|TRUNCATE|WRITE",
   "path": "/test/log-me/config-file",
   "activity-type": "OPEN",
   "resource-arn": "arn:aws:transfer:us-east-1:500655546075:server/s-3fe215d89f074ed2a",
   "session-id": "9ca9a0e1cec6ad9d"
}
```

The following log entries reflect the SFTP user disconnecting from their SFTP session. First, the client closes the connection to the bucket, and then the client disconnects the SFTP session.

```
{
   "path": "/test/log-me/config-file",
   "activity-type": "CLOSE",
   "resource-arn": "arn:aws:transfer:us-east-1:500655546075:server/s-3fe215d89f074ed2a",
   "bytes-in": "121",
   "session-id": "9ca9a0e1cec6ad9d"
}

{
   "activity-type": "DISCONNECTED",
   "resource-arn": "arn:aws:transfer:us-east-1:500655546075:server/s-3fe215d89f074ed2a",
   "session-id": "9ca9a0e1cec6ad9d"
}
```

**Note**  
The available activity types are as follows: `AUTH_FAILURE`, `CONNECTED`, `DISCONNECTED`, `ERROR`, `EXIT_REASON`, `CLOSE`, `CREATE_SYMLINK`, `DELETE`, `MKDIR`, `OPEN`, `PARTIAL_CLOSE`, `RENAME`, `RMDIR`, `SETSTAT`, `TLS_RESUME_FAILURE`.
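
Because every structured log entry for a session carries the same `session-id`, you can group entries to reconstruct a session timeline. The following sketch groups abbreviated versions of the entries above; the entry strings are illustrative samples, not live log output.

```python
import json
from collections import defaultdict

# Abbreviated structured log entries, one JSON document per line,
# as they might appear in the CloudWatch log stream.
raw_entries = [
    '{"activity-type": "CONNECTED", "user": "log-me", "session-id": "9ca9a0e1cec6ad9d"}',
    '{"activity-type": "OPEN", "path": "/test/log-me/config-file", "session-id": "9ca9a0e1cec6ad9d"}',
    '{"activity-type": "CLOSE", "bytes-in": "121", "session-id": "9ca9a0e1cec6ad9d"}',
    '{"activity-type": "DISCONNECTED", "session-id": "9ca9a0e1cec6ad9d"}',
]

# Group parsed entries by session-id to rebuild each session's activity.
sessions = defaultdict(list)
for raw in raw_entries:
    entry = json.loads(raw)
    sessions[entry["session-id"]].append(entry["activity-type"])

# A complete upload session runs CONNECTED -> OPEN -> CLOSE -> DISCONNECTED.
print(sessions["9ca9a0e1cec6ad9d"])
```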

## Example log entries for SFTP connectors
<a name="example-sftp-connector-logs"></a>

This section contains example logs for both a successful and an unsuccessful transfer. Logs are generated to a log group named `/aws/transfer/connector-id`, where *connector-id* is the identifier for your SFTP connector. Log entries for SFTP connectors are generated when you run either a `StartFileTransfer` or `StartDirectoryListing` command.

This log entry is for a transfer that completed successfully.

```
{
    "operation": "RETRIEVE",
    "timestamp": "2023-10-25T16:33:27.373720Z",
    "connector-id": "connector-id",
    "transfer-id": "transfer-id",
    "file-transfer-id": "transfer-id/file-transfer-id",
    "url": "sftp://192.0.2.0",
    "file-path": "/remotebucket/remotefilepath",
    "status-code": "COMPLETED",
    "start-time": "2023-10-25T16:33:26.945481Z",
    "end-time": "2023-10-25T16:33:27.159823Z",
    "account-id": "480351544584",
    "connector-arn": "arn:aws:transfer:us-east-1:account-id:connector/connector-id",
    "local-directory-path": "/connectors-localbucket",
    "bytes": 514,
    "egress-type": "SERVICE_MANAGED"
}
```

This log entry is for a transfer that timed out, and thus was not completed successfully.

```
{
    "operation": "RETRIEVE",
    "timestamp": "2023-10-25T22:33:47.625703Z",
    "connector-id": "connector-id",
    "transfer-id": "transfer-id",
    "file-transfer-id": "transfer-id/file-transfer-id",
    "url": "sftp://192.0.2.0",
    "file-path": "/remotebucket/remotefilepath",
    "status-code": "FAILED",
    "failure-code": "TIMEOUT_ERROR",
    "failure-message": "Transfer request timeout.",
    "account-id": "480351544584",
    "connector-arn": "arn:aws:transfer:us-east-1:account-id:connector/connector-id",
    "local-directory-path": "/connectors-localbucket",
    "egress-type": "SERVICE_MANAGED"
}
```

This log entry is for a SEND operation that succeeds.

```
{
    "operation": "SEND",
    "timestamp": "2024-04-24T18:16:12.513207284Z",
    "connector-id": "connector-id",
    "transfer-id": "transfer-id",
    "file-transfer-id": "transfer-id/file-transfer-id",
    "url": "sftp://server-id.server.transfer.us-east-1.amazonaws.com",
    "file-path": "/amzn-s3-demo-bucket/my-test-folder/connector-metrics-us-east-1-2024-01-02.csv",
    "status-code": "COMPLETED",
    "start-time": "2024-04-24T18:16:12.295235884Z",
    "end-time": "2024-04-24T18:16:12.461840732Z",
    "account-id": "255443218509",
    "connector-arn": "arn:aws:transfer:us-east-1:account-id:connector/connector-id",
    "bytes": 275,
    "egress-type": "SERVICE_MANAGED"
}
```

The following list describes some key fields in the previous log examples.
+ `timestamp` represents when the log is added to CloudWatch. `start-time` and `end-time` correspond to when the connector actually starts and finishes a transfer.
+ `transfer-id` is a unique identifier that is assigned for each `start-file-transfer` request. If the user passes multiple file paths in a single `start-file-transfer` API operation, all the files share the same `transfer-id`.
+ `file-transfer-id` is a unique value generated for each file transferred. Note that the initial portion of the `file-transfer-id` is the same as `transfer-id`.
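
Because each `file-transfer-id` begins with its `transfer-id`, you can group per-file log entries back into the originating `start-file-transfer` request. A minimal sketch, using made-up IDs:

```python
from collections import defaultdict

# Sample connector log entries; each file gets its own file-transfer-id,
# prefixed with the transfer-id of the request that started it.
entries = [
    {"file-transfer-id": "transfer-AAA/file-001", "status-code": "COMPLETED"},
    {"file-transfer-id": "transfer-AAA/file-002", "status-code": "FAILED"},
    {"file-transfer-id": "transfer-BBB/file-003", "status-code": "COMPLETED"},
]

# Group file outcomes by the transfer-id portion (everything before the slash).
by_request = defaultdict(list)
for entry in entries:
    transfer_id, _, _ = entry["file-transfer-id"].partition("/")
    by_request[transfer_id].append(entry["status-code"])

print(dict(by_request))
# -> {'transfer-AAA': ['COMPLETED', 'FAILED'], 'transfer-BBB': ['COMPLETED']}
```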

## Example log entries for VPC Lattice connectors
<a name="example-vpc-lattice-connector-logs"></a>

This section contains example logs for VPC Lattice connectors. For VPC Lattice connectors, logs include additional fields that provide information about the connector configuration and network setup.

This log entry is for a VPC Lattice connector SEND operation that completed successfully.

```
{
  "operation": "SEND",
  "timestamp": "2025-09-05T14:20:19.577192454Z",
  "connector-id": "connector-id",
  "transfer-id": "transfer-id",
  "file-transfer-id": "transfer-id/file-transfer-id",
  "file-path": "/amzn-s3-demo-bucket/my-test-folder/connector-vpc-lattice-us-east-1-2025-03-22.csv",
  "status-code": "COMPLETED",
  "start-time": "2025-09-05T14:20:19.434072509Z",
  "end-time": "2025-09-05T14:20:19.481453346Z",
  "account-id": "account-id",
  "connector-arn": "arn:aws:transfer:us-east-1:account-id:connector/connector-id",
  "remote-directory-path": "/test-bucket/test-folder/",
  "bytes": 262,
  "egress-type": "VPC_LATTICE",
  "vpc-lattice-resource-configuration-arn": "arn:aws:vpc-lattice:us-east-1:account-id:resourceconfiguration/resource-configuration-arn-id",
  "vpc-lattice-port-number": 22
}
```

VPC Lattice connector logs include the following additional fields:
+ `egress-type` - Type of egress configuration for the connector
+ `vpc-lattice-resource-configuration-arn` - ARN of the VPC Lattice Resource Configuration that defines the target SFTP server location
+ `vpc-lattice-port-number` - Port number for connecting to the SFTP server through VPC Lattice
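
When processing connector logs, you can branch on `egress-type` to decide whether the VPC Lattice fields should be present. A minimal validation sketch; the helper function is illustrative, and the field names follow the example above:

```python
def lattice_fields_present(entry):
    """Return True if a VPC_LATTICE log entry carries its extra fields,
    or if the entry does not use VPC Lattice egress at all."""
    if entry.get("egress-type") != "VPC_LATTICE":
        return True
    required = ("vpc-lattice-resource-configuration-arn", "vpc-lattice-port-number")
    return all(field in entry for field in required)

print(lattice_fields_present({"egress-type": "SERVICE_MANAGED"}))  # True
print(lattice_fields_present({"egress-type": "VPC_LATTICE"}))      # False
print(lattice_fields_present({
    "egress-type": "VPC_LATTICE",
    "vpc-lattice-resource-configuration-arn": "arn:aws:vpc-lattice:us-east-1:123456789012:resourceconfiguration/rcfg-example",
    "vpc-lattice-port-number": 22,
}))                                                                # True
```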

## Example log entries for key exchange algorithm failures
<a name="example-kex-logs"></a>

This section contains example logs where the key exchange algorithm (KEX) failed. These are examples from the **ERRORS** log stream for structured logs.

This log entry is an example where there is a host key type error.

```
{
    "activity-type": "KEX_FAILURE",
    "source-ip": "999.999.999.999",
    "resource-arn": "arn:aws:transfer:us-east-1:999999999999:server/s-999999999999999999",
    "message": "no matching host key type found",
    "kex": "ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519,ssh-rsa,ssh-dss"
}
```

This log entry is an example where there is a KEX mismatch.

```
{
    "activity-type": "KEX_FAILURE",
    "source-ip": "999.999.999.999",
    "resource-arn": "arn:aws:transfer:us-east-1:999999999999:server/s-999999999999999999",
    "message": "no matching key exchange method found",
    "kex": "diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group14-sha256"
}
```
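
The `kex` field in a `KEX_FAILURE` entry lists the algorithms the client offered, so you can compare the offer against the algorithms your server supports to diagnose the mismatch. A minimal sketch; the server-side list here is illustrative, not a real Transfer Family security policy, so check your server's security policy for the algorithms it actually supports.

```python
# Hypothetical server-side key exchange algorithm list.
SERVER_KEX = {
    "curve25519-sha256",
    "ecdh-sha2-nistp256",
    "diffie-hellman-group14-sha256",
}

def common_kex(log_entry):
    """Return the algorithms both sides support, from a KEX_FAILURE entry."""
    offered = set(log_entry["kex"].split(","))
    return sorted(offered & SERVER_KEX)

failure = {
    "activity-type": "KEX_FAILURE",
    "message": "no matching key exchange method found",
    "kex": "diffie-hellman-group1-sha1,diffie-hellman-group14-sha1",
}
print(common_kex(failure))  # -> [] : the client offered no supported algorithm
```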

# Using CloudWatch metrics for Transfer Family servers
<a name="metrics"></a>

**Note**  
You can also get metrics for Transfer Family from within the Transfer Family console itself. For details, see [Monitoring usage in the console](monitor-usage-transfer-console.md).

You can get information about your server using CloudWatch metrics. A *metric* represents a time-ordered set of data points that are published to CloudWatch. When using metrics, you must specify the Transfer Family namespace, metric name, and [dimension](#cw-dimensions). For more information about metrics, see [Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Metric) in the *Amazon CloudWatch User Guide*.

 The following table describes the CloudWatch metrics for Transfer Family.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/transfer/latest/userguide/metrics.html)

## Transfer Family dimensions
<a name="cw-dimensions"></a>

A *dimension* is a name/value pair that is part of the identity of a metric. For more information about dimensions, see [Dimensions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Dimension) in the *Amazon CloudWatch User Guide*.

The following table describes the CloudWatch dimensions for Transfer Family.


| Dimension | Description | 
| --- | --- | 
| `ServerId` | The unique ID of the server. | 
| `ConnectorId` | The unique ID of the connector. Used for AS2, for the `OutboundMessage` and `OutboundFailedMessage` metrics. | 
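
To query a server metric, you combine the `AWS/Transfer` namespace, a metric name, and the `ServerId` dimension. The following sketch builds the request parameters for CloudWatch's `GetMetricStatistics` API; the server ID is a placeholder, and the final call to a `boto3` CloudWatch client is shown only as a comment so that the snippet stays self-contained.

```python
from datetime import datetime, timedelta, timezone

# Placeholder server ID; use your own Transfer Family server ID.
server_id = "s-1234abcd5678efghi"

end = datetime.now(timezone.utc)
params = {
    "Namespace": "AWS/Transfer",
    "MetricName": "BytesIn",
    "Dimensions": [{"Name": "ServerId", "Value": server_id}],
    "StartTime": end - timedelta(hours=1),
    "EndTime": end,
    "Period": 300,        # 5-minute data points
    "Statistics": ["Sum"],
}
# With boto3 (not imported here), you would run:
#   boto3.client("cloudwatch").get_metric_statistics(**params)
print(params["Namespace"], params["MetricName"])
```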

## Using AWS User Notifications with AWS Transfer Family
<a name="using-user-notifications"></a>

To get notified about AWS Transfer Family events, you can use [AWS User Notifications](https://docs.aws.amazon.com/notifications/latest/userguide/what-is.html) to set up various delivery channels. When an event matches a rule that you specify, you receive a notification. 

You can receive notifications for events through multiple channels, including email, [Amazon Q Developer in chat applications](https://docs.aws.amazon.com/chatbot/latest/adminguide/what-is.html) chat notifications, or [AWS Console Mobile Application](https://docs.aws.amazon.com/consolemobileapp/latest/userguide/what-is-consolemobileapp.html) push notifications. You can also see notifications in the [Console Notifications Center](https://console.aws.amazon.com/notifications/). User Notifications supports aggregation, which can reduce the number of notifications that you receive during specific events.

For more information, see the [Customize file delivery notifications using AWS Transfer Family managed workflows](https://aws.amazon.com/blogs/storage/customize-file-delivery-notifications-using-aws-transfer-family-managed-workflows/) blog post, and [What is AWS User Notifications?](https://docs.aws.amazon.com/notifications/latest/userguide/what-is.html) in the *AWS User Notifications User Guide*.

# Using queries to filter log entries
<a name="cw-queries"></a>

You can use CloudWatch queries to filter and identify log entries for Transfer Family. This section contains some examples.

1. Sign in to the AWS Management Console and open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. You can create queries or rules.
   + To create a **Logs Insights** query, choose **Logs Insights** from the left navigation panel, and then enter the details for your query.
   + To create a **Contributor Insights** rule, choose **Insights**, then **Contributor Insights** from the left navigation panel, and then enter the details for your rule.

1. Run the query or rule that you created.

**View the top authentication failure contributors**

In your structured logs, an authentication failure log entry looks similar to the following:

```
{
  "method":"password",
  "activity-type":"AUTH_FAILURE",
  "source-ip":"999.999.999.999",
  "resource-arn":"arn:aws:transfer:us-east-1:999999999999:server/s-0123456789abcdef",
  "message":"Invalid user name or password",
  "user":"exampleUser"
}
```

Run the following query to view the top contributors to authentication failures.

```
filter @logStream = 'ERRORS'
| filter `activity-type` = 'AUTH_FAILURE'
| stats count() as AuthFailures by user, method
| sort by AuthFailures desc
| limit 10
```
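
The aggregation that this query performs can also be sketched locally: count failures by `(user, method)`, sort in descending order, and keep the top entries. A minimal Python equivalent, using made-up sample entries:

```python
from collections import Counter

# Sample AUTH_FAILURE entries from the ERRORS log stream.
failures = [
    {"user": "exampleUser", "method": "password"},
    {"user": "exampleUser", "method": "password"},
    {"user": "admin", "method": "publickey"},
]

# Equivalent of: stats count() as AuthFailures by user, method
#                | sort by AuthFailures desc | limit 10
counts = Counter((f["user"], f["method"]) for f in failures)
for (user, method), auth_failures in counts.most_common(10):
    print(user, method, auth_failures)
```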

Rather than using **CloudWatch Logs Insights**, you can create a **CloudWatch Contributor Insights** rule to view authentication failures. Create a rule similar to the following.

```
{
    "AggregateOn": "Count",
    "Contribution": {
        "Filters": [
            {
                "Match": "$.activity-type",
                "In": [
                    "AUTH_FAILURE"
                ]
            }
        ],
        "Keys": [
            "$.user"
        ]
    },
    "LogFormat": "JSON",
    "Schema": {
        "Name": "CloudWatchLogRule",
        "Version": 1
    },
    "LogGroupARNs": [
        "arn:aws:logs:us-east-1:999999999999:log-group:/customer/structured_logs"
    ]
}
```
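
You can also create this rule programmatically with CloudWatch's `PutInsightRule` API, which takes the rule definition as a JSON string. A hedged sketch; the rule name is a placeholder, the log group ARN matches the example above, and the `put_insight_rule` call is shown only as a comment so that the snippet stays self-contained.

```python
import json

# Same rule definition as above; the log group ARN is an example value.
rule_definition = {
    "AggregateOn": "Count",
    "Contribution": {
        "Filters": [{"Match": "$.activity-type", "In": ["AUTH_FAILURE"]}],
        "Keys": ["$.user"],
    },
    "LogFormat": "JSON",
    "Schema": {"Name": "CloudWatchLogRule", "Version": 1},
    "LogGroupARNs": [
        "arn:aws:logs:us-east-1:999999999999:log-group:/customer/structured_logs"
    ],
}

# PutInsightRule expects the definition serialized as a JSON string.
rule_body = json.dumps(rule_definition)
# With boto3 (not imported here), you would create the rule like this:
#   boto3.client("cloudwatch").put_insight_rule(
#       RuleName="transfer-auth-failures",  # placeholder name
#       RuleState="ENABLED",
#       RuleDefinition=rule_body,
#   )
print(json.loads(rule_body)["Contribution"]["Keys"])
```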

**View log entries where a file was opened**

In your structured logs, a file read log entry looks similar to the following:

```
{
  "mode":"READ",
  "path":"/fs-0df669c89d9bf7f45/avtester/example",
  "activity-type":"OPEN",
  "resource-arn":"arn:aws:transfer:us-east-1:999999999999:server/s-0123456789abcdef",
  "session-id":"0049cd844c7536c06a89"
}
```

Run the following query to view log entries that indicate a file was opened.

```
filter `activity-type` = 'OPEN'
| display @timestamp, @logStream, `session-id`, mode, path
```