Collecting data from AWS services

Amazon Security Lake can collect logs and events from the following natively supported AWS services:

  • AWS CloudTrail management and data events (S3, Lambda)

  • Amazon Elastic Kubernetes Service (Amazon EKS) Audit Logs

  • Amazon Route 53 resolver query logs

  • AWS Security Hub findings

  • Amazon Virtual Private Cloud (Amazon VPC) Flow Logs

Security Lake automatically transforms this data into the Open Cybersecurity Schema Framework (OCSF) and Apache Parquet format.

Tip

Except for CloudTrail management events, you don't need to separately configure logging in these services to add them as log sources in Security Lake. If you already have logging configured in these services, you don't need to change your logging configuration. Security Lake pulls data directly from these services through an independent and duplicated stream of events.

Prerequisite: Verify permissions

To add an AWS service as a source in Security Lake, you must have the necessary permissions. Verify that the AWS Identity and Access Management (IAM) policy attached to the role that you use to add a source has permission to perform the following actions:

  • glue:CreateDatabase

  • glue:CreateTable

  • glue:GetDatabase

  • glue:GetTable

  • glue:UpdateTable

  • iam:CreateServiceLinkedRole

  • s3:GetObject

  • s3:PutObject

We recommend scoping the s3:GetObject and s3:PutObject permissions for the role with the following resource and condition constraints.

{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowUpdatingSecurityLakeS3Buckets", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject" ], "Resource": "arn:aws:s3:::aws-security-data-lake*", "Condition": { "StringEquals": { "aws:ResourceAccount": "${aws:PrincipalAccount}" } } } ] }

These actions allow you to collect logs and events from an AWS service and send them to the correct AWS Glue database and table.

If you use an AWS KMS key for server-side encryption of your data lake, you also need permission to perform the kms:DescribeKey action.
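
For example, you might attach the remaining permissions to the role as an inline policy by using the AWS CLI. The following sketch is illustrative: the role name (SecurityLakeSetupRole) and the policy name are placeholders, and in production you might scope the Resource element more narrowly.

$ aws iam put-role-policy \
    --role-name SecurityLakeSetupRole \
    --policy-name security-lake-source-setup \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowGlueAndKmsActionsForSecurityLake",
                "Effect": "Allow",
                "Action": [
                    "glue:CreateDatabase",
                    "glue:CreateTable",
                    "glue:GetDatabase",
                    "glue:GetTable",
                    "glue:UpdateTable",
                    "iam:CreateServiceLinkedRole",
                    "kms:DescribeKey"
                ],
                "Resource": "*"
            }
        ]
    }'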

CloudTrail event logs

AWS CloudTrail provides you with a history of AWS API calls for your account, including API calls made using the AWS Management Console, the AWS SDKs, the command line tools, and certain AWS services. CloudTrail also allows you to identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address that the calls were made from, and when the calls occurred. For more information, see the AWS CloudTrail User Guide.

Security Lake can collect logs associated with CloudTrail management events and CloudTrail data events for S3 and Lambda. CloudTrail management events, S3 data events, and Lambda data events are three separate sources in Security Lake. As a result, they have different values for sourceName when you add one of these as an ingested log source. Management events, also known as control plane events, provide insight into management operations that are performed on resources in your AWS account. CloudTrail data events, also known as data plane operations, show the resource operations performed on or within resources in your AWS account. These operations are often high-volume activities.
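
Because these are separate sources, you specify a distinct sourceName value for each one that you enable. For example, the following AWS CLI sketch enables all three CloudTrail-related sources in a single create-aws-log-source call. The Region is a placeholder; the sourceName values shown correspond to the AwsLogSourceName values in the Security Lake API reference.

$ aws securitylake create-aws-log-source \
    --sources sourceName=CLOUD_TRAIL_MGMT,regions='["us-east-1"]' \
              sourceName=S3_DATA,regions='["us-east-1"]' \
              sourceName=LAMBDA_EXECUTION,regions='["us-east-1"]'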

To collect CloudTrail management events in Security Lake, you must have at least one CloudTrail multi-Region organization trail that collects read and write CloudTrail management events. Logging must be enabled for the trail.

A multi-Region trail delivers log files from multiple Regions to a single Amazon Simple Storage Service (Amazon S3) bucket for a single AWS account. If you already have a multi-Region trail that's managed through the CloudTrail console or AWS Control Tower, no further action is required.
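
If you don't already have such a trail, you can create one from your organization's management account. The following AWS CLI sketch is illustrative: the trail name and the Amazon S3 bucket name (amzn-s3-demo-bucket) are placeholders, and the bucket must already have a bucket policy that allows CloudTrail to deliver log files to it. A new trail logs read and write management events by default.

$ aws cloudtrail create-trail \
    --name security-lake-org-trail \
    --s3-bucket-name amzn-s3-demo-bucket \
    --is-multi-region-trail \
    --is-organization-trail

$ aws cloudtrail start-logging --name security-lake-org-trail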

When you add CloudTrail events as a source, Security Lake immediately starts collecting your CloudTrail event logs. It consumes CloudTrail management and data events directly from CloudTrail through an independent and duplicated stream of events.

Security Lake doesn't manage your CloudTrail events or affect your existing CloudTrail configurations. To manage access and retention of your CloudTrail events directly, you must use the CloudTrail service console or API. For more information, see Viewing events with CloudTrail Event history in the AWS CloudTrail User Guide.

For information about how Security Lake normalizes CloudTrail events to OCSF, see the mapping reference in the GitHub OCSF repository for CloudTrail events.

Amazon EKS Audit Logs

When you add Amazon EKS Audit Logs as a source, Security Lake starts collecting in-depth information about the activities performed on the Kubernetes resources running in your Amazon EKS clusters. EKS Audit Logs help you detect potentially suspicious activities in your clusters.

Security Lake consumes EKS Audit Log events directly from the Amazon EKS control plane logging feature through an independent and duplicated stream of audit logs. This process doesn't require any additional setup or affect any existing Amazon EKS control plane logging configurations that you might have. For more information, see Amazon EKS control plane logging in the Amazon EKS User Guide.

For information about how Security Lake normalizes EKS Audit Logs events to OCSF, see the mapping reference in the GitHub OCSF repository for Amazon EKS Audit Logs events.
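
Although Security Lake doesn't require any control plane logging configuration, you might separately enable audit log delivery to Amazon CloudWatch Logs for your own analysis. The following AWS CLI sketch shows that optional, independent step; the cluster name (my-eks-cluster) is a placeholder, and the change doesn't affect Security Lake's collection.

$ aws eks update-cluster-config \
    --name my-eks-cluster \
    --logging '{"clusterLogging":[{"types":["audit"],"enabled":true}]}'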

Route 53 resolver query logs

Route 53 resolver query logs track DNS queries made by resources within your Amazon Virtual Private Cloud (Amazon VPC). This helps you understand how your applications are operating and spot security threats.

When you add Route 53 resolver query logs as a source in Security Lake, Security Lake immediately starts collecting your resolver query logs directly from Route 53 through an independent and duplicated stream of events.

Security Lake doesn't manage your Route 53 logs or affect your existing resolver query logging configurations. To manage resolver query logs, you must use the Route 53 service console. For more information, see Managing Resolver query logging configurations in the Amazon Route 53 Developer Guide.
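
For example, to review the resolver query logging configurations that you manage yourself, independent of the stream that Security Lake consumes, you might run the following AWS CLI commands.

$ aws route53resolver list-resolver-query-log-configs

$ aws route53resolver list-resolver-query-log-config-associations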

For information about how Security Lake normalizes Route 53 resolver query logs to OCSF, see the mapping reference in the GitHub OCSF repository for Route 53 logs.

Security Hub findings

Security Hub findings help you understand your security posture in AWS and let you check your environment against security industry standards and best practices. Security Hub collects findings from various sources, including integrations with other AWS services, third-party product integrations, and checks against Security Hub controls. Security Hub processes findings in a standard format called AWS Security Finding Format (ASFF).

When you add Security Hub findings as a source in Security Lake, Security Lake immediately starts collecting your findings directly from Security Hub through an independent and duplicated stream of events. Security Lake also transforms the findings from ASFF to the Open Cybersecurity Schema Framework (OCSF).

Security Lake doesn't manage your Security Hub findings or affect your Security Hub settings. To manage Security Hub findings, you must use the Security Hub service console, API, or AWS CLI. For more information, see Findings in AWS Security Hub in the AWS Security Hub User Guide.
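
For example, to inspect the raw ASFF representation of a finding before Security Lake transforms it to OCSF, you might retrieve a single finding with the AWS CLI.

$ aws securityhub get-findings --max-items 1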

For information about how Security Lake normalizes Security Hub findings to OCSF, see the mapping reference in the GitHub OCSF repository for Security Hub findings.

VPC Flow Logs

The VPC Flow Logs feature of Amazon VPC captures information about the IP traffic going to and from network interfaces within your environment.

When you add VPC Flow Logs as a source in Security Lake, Security Lake immediately starts collecting your VPC Flow Logs. It consumes VPC Flow Logs directly from Amazon VPC through an independent and duplicated stream of Flow Logs.

Security Lake doesn't manage your VPC Flow Logs or affect your Amazon VPC configurations. To manage your Flow Logs, you must use the Amazon VPC service console. For more information, see Work with Flow Logs in the Amazon VPC User Guide.
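
For example, to review the flow logs that you manage yourself, independent of the stream that Security Lake consumes, you might run the following AWS CLI command. The VPC ID is a placeholder.

$ aws ec2 describe-flow-logs \
    --filter Name=resource-id,Values=vpc-1234567890abcdef0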

For information about how Security Lake normalizes VPC Flow Logs to OCSF, see the mapping reference in the GitHub OCSF repository for VPC Flow Logs.

Adding an AWS service as a source

After you add an AWS service as a source, Security Lake automatically starts collecting security logs and events from it. These instructions tell you how to add a natively supported AWS service as a source in Security Lake. For instructions on adding a custom source, see Collecting data from custom sources.

Console
To add an AWS log source (console)
  1. Open the Security Lake console at https://console.aws.amazon.com/securitylake/.

  2. Choose Sources from the navigation pane.

  3. Select the AWS service that you want to collect data from, and choose Configure.

  4. In the Source settings section, enable the source and select the Version of data source that you want to use for data ingestion. By default, Security Lake ingests the latest version of the data source.

    Important

    If you don't have the required role permissions to enable the new version of the AWS log source in the specified Region, contact your Security Lake administrator. For more information, see Update role permissions.

    For your subscribers to ingest the selected version of the data source, you must also update your subscriber settings. For details on how to edit a subscriber, see Subscriber management in Amazon Security Lake.

    Optionally, you can choose to ingest the latest version only and disable all previous source versions used for data ingestion.

  5. In the Regions section, select the Regions in which you want to collect data for the source. Security Lake will collect data from the source from all accounts in the selected Regions.

  6. Choose Enable.

API

To add an AWS log source (API)

To add an AWS service as a source programmatically, use the CreateAwsLogSource operation of the Security Lake API. If you're using the AWS Command Line Interface (AWS CLI), run the create-aws-log-source command. The sourceName and regions parameters are required. Optionally, you can limit the scope of the source to specific accounts or a specific sourceVersion.

Important

When you don't provide a parameter in your command, Security Lake assumes that the missing parameter refers to the entire set. For example, if you don't provide the accounts parameter, the command applies to the entire set of accounts in your organization.

The following example adds VPC Flow Logs as a source in the designated accounts and Regions. This example is formatted for Linux, macOS, or Unix, and it uses the backslash (\) line-continuation character to improve readability.

Note

If you apply this request to a Region in which you haven't enabled Security Lake, you'll receive an error. You can resolve the error by enabling Security Lake in that Region or by using the regions parameter to specify only those Regions in which you've enabled Security Lake.

$ aws securitylake create-aws-log-source \
    --sources sourceName=VPC_FLOW,accounts='["123456789012", "111122223333"]',regions='["us-east-2"]',sourceVersion="1.0"

Updating role permissions

If you don't have the required role permissions or resources (a new AWS Lambda function and an Amazon Simple Queue Service (Amazon SQS) queue) to ingest data from a new version of the data source, you must update your AmazonSecurityLakeMetaStoreManagerV2 role permissions and create a new set of resources to process data from your sources.

Choose your preferred method, and follow the instructions to update your role permissions and create new resources to process data from a new version of an AWS log source in a specified Region. This is a one-time action, as the permissions and resources are automatically applied to future data source releases.

Console
To update role permissions (console)
  1. Open the Security Lake console at https://console.aws.amazon.com/securitylake/.

    Sign in with the credentials of the delegated Security Lake administrator.

  2. In the navigation pane, under Settings, choose General.

  3. Choose Update role permissions.

  4. In the Service access section, do one of the following:

    • Create and use a new service role— You can use the AmazonSecurityLakeMetaStoreManagerV2 role created by Security Lake.

    • Use an existing service role— You can choose an existing service role from the Service role name list.

  5. Choose Apply.

API

To update role permissions (API)

To update permissions programmatically, use the UpdateDataLake operation of the Security Lake API. To update permissions using the AWS CLI, run the update-data-lake command.

To update your role permissions, you must attach the AmazonSecurityLakeMetastoreManager policy to the role.
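
As a sketch, the following AWS CLI command illustrates one way to pass an updated role when you update the data lake. The role ARN, account ID, and Region are placeholders, and you should confirm the exact parameter shape against the current update-data-lake CLI reference.

$ aws securitylake update-data-lake \
    --meta-store-manager-role-arn arn:aws:iam::123456789012:role/AmazonSecurityLakeMetaStoreManagerV2 \
    --configurations '[{"region":"us-east-1"}]'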

Deleting the AmazonSecurityLakeMetaStoreManager role

Important

After you update your role permissions to AmazonSecurityLakeMetaStoreManagerV2, confirm that the data lake works correctly before you remove the old AmazonSecurityLakeMetaStoreManager role. We recommend waiting at least 4 hours before you remove the role.

If you decide to remove the role, you must first delete the AmazonSecurityLakeMetaStoreManager role from AWS Lake Formation.

Follow these steps to remove the AmazonSecurityLakeMetaStoreManager role from the Lake Formation console.

  1. Sign in to the AWS Management Console, and open the Lake Formation console at https://console.aws.amazon.com/lakeformation/.

  2. In the Lake Formation console, from the navigation pane, choose Administrative roles and tasks.

  3. Remove AmazonSecurityLakeMetaStoreManager from each Region.
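
If you prefer the AWS CLI, the following sketch shows an equivalent edit, assuming the role appears in the DataLakeAdmins list of your Lake Formation settings. Retrieve the current settings, remove the AmazonSecurityLakeMetaStoreManager entry from the returned JSON, save the result to a file, and write the settings back. Repeat this in each Region.

$ aws lakeformation get-data-lake-settings --region us-east-1

$ aws lakeformation put-data-lake-settings \
    --region us-east-1 \
    --data-lake-settings file://updated-settings.json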

Removing an AWS service as a source

Choose your access method, and follow these steps to remove a natively supported AWS service as a Security Lake source. You can remove a source for one or more Regions. When you remove the source, Security Lake stops collecting data from that source in the specified Regions and accounts, and subscribers can no longer consume new data from the source. However, subscribers can still consume data that Security Lake collected from the source before removal. You can only use these instructions to remove a natively supported AWS service as a source. For information about removing a custom source, see Collecting data from custom sources.

Console
  1. Open the Security Lake console at https://console.aws.amazon.com/securitylake/.

  2. Choose Sources from the navigation pane.

  3. Select a source, and choose Disable.

  4. Select a Region or Regions in which you want to stop collecting data from this source. Security Lake will stop collecting data from the source from all accounts in the selected Regions.

API

To remove an AWS service as a source programmatically, use the DeleteAwsLogSource operation of the Security Lake API. If you're using the AWS Command Line Interface (AWS CLI), run the delete-aws-log-source command. The sourceName and regions parameters are required. Optionally, you can limit the scope of the removal to specific accounts or a specific sourceVersion.

Important

When you don't provide a parameter in your command, Security Lake assumes that the missing parameter refers to the entire set. For example, if you don't provide the accounts parameter, the command applies to the entire set of accounts in your organization.

The following example removes VPC Flow Logs as a source in the designated accounts and Regions.

$ aws securitylake delete-aws-log-source \
    --sources sourceName=VPC_FLOW,accounts='["123456789012", "111122223333"]',regions='["us-east-1", "us-east-2"]',sourceVersion="1.0"

The following example removes Route 53 as a source in the designated account and Regions.

$ aws securitylake delete-aws-log-source \
    --sources sourceName=ROUTE53,accounts='["123456789012"]',regions='["us-east-1", "us-east-2"]',sourceVersion="1.0"

The preceding examples are formatted for Linux, macOS, or Unix, and they use the backslash (\) line-continuation character to improve readability.

Getting the status of source collection

Choose your access method, and follow the steps to get a snapshot of the accounts and sources for which log collection is enabled in the current Region.

Console
To get the status of log collection in the current Region
  1. Open the Security Lake console at https://console.aws.amazon.com/securitylake/.

  2. On the navigation pane, choose Accounts.

  3. Hover the cursor over the number in the Sources column to see which logs are enabled for the selected account.

API

To get the status of log collection in the current Region, use the GetDataLakeSources operation of the Security Lake API. If you're using the AWS CLI, run the get-data-lake-sources command. For the accounts parameter, you can specify one or more AWS account IDs as a list. If your request succeeds, Security Lake returns a snapshot for those accounts in the current Region, including which AWS sources Security Lake is collecting data from and the status of each source. If you don't include the accounts parameter, the response includes the status of log collection for all accounts in which Security Lake is configured in the current Region.

For example, the following AWS CLI command retrieves log collection status for the specified accounts in the current Region. This example is formatted for Linux, macOS, or Unix, and it uses the backslash (\) line-continuation character to improve readability.

$ aws securitylake get-data-lake-sources \
    --accounts "123456789012" "111122223333"