Collecting data from custom sources

Amazon Security Lake can collect logs and events from third-party custom sources. For each custom source, Security Lake handles the following:

  • Provides a unique prefix for the source in your Amazon S3 bucket.

  • Creates a role in AWS Identity and Access Management (IAM) that permits a custom source to write data to the data lake. The permissions boundary for this role is set by an AWS managed policy called AmazonSecurityLakePermissionsBoundary.

  • Creates an AWS Lake Formation table to organize objects that the source writes to Security Lake.

  • Sets up an AWS Glue crawler to partition your source data. The crawler populates the AWS Glue Data Catalog with the table. It also automatically discovers new source data and extracts schema definitions.

To be added to Security Lake, a custom source must meet the following requirements:

  1. Destination – The custom source must be able to write data to Security Lake as a set of S3 objects underneath the prefix assigned to the source. For sources that contain multiple categories of data, you should deliver each unique Open Cybersecurity Schema Framework (OCSF) event class as a separate source. Security Lake creates an IAM role that permits the custom source to write to the specified location in your S3 bucket.

    Note

    Use the OCSF Validation tool to verify that the custom source is compatible with OCSF Schema 1.1.

  2. Format – Each S3 object that's collected from the custom source should be formatted as an Apache Parquet file.

  3. Schema – The same OCSF event class should apply to each record within a Parquet-formatted object.

Best practices for ingesting custom sources

To facilitate efficient data processing and querying, we recommend following these best practices when adding a custom source to Security Lake:

Partitioning

Objects should be partitioned by source location, AWS Region, AWS account, and date. The partition data path is formatted as bucket-name/source-location/region=region/accountId=accountID/eventDay=YYYYMMDD.

A sample partition is aws-security-data-lake-us-west-2-lake-uid/source-location/region=us-west-2/accountId=123456789012/eventDay=20230428/.

  • bucket-name – The name of the Amazon S3 bucket in which Security Lake stores your custom source data.

  • source-location – Prefix for the custom source in your S3 bucket. Security Lake stores all S3 objects for a given source under this prefix, and the prefix is unique to the given source.

  • region – AWS Region to which the data is written.

  • accountId – AWS account ID that the records in the source partition pertain to.

  • eventDay – Date on which the event occurred, formatted as an eight-character string (YYYYMMDD).
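
For illustration, the following Python sketch assembles a partition path from the components above. The helper function is hypothetical, not part of any AWS SDK:

from datetime import datetime, timezone

def partition_prefix(source_location: str, region: str, account_id: str,
                     event_time: datetime) -> str:
    # Mirrors the layout described above:
    # source-location/region=.../accountId=.../eventDay=YYYYMMDD/
    event_day = event_time.astimezone(timezone.utc).strftime("%Y%m%d")
    return (f"{source_location}/region={region}/"
            f"accountId={account_id}/eventDay={event_day}/")

# Yields "source-location/region=us-west-2/accountId=123456789012/eventDay=20230428/"
print(partition_prefix("source-location", "us-west-2", "123456789012",
                       datetime(2023, 4, 28, tzinfo=timezone.utc)))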

Object size and rate

Objects written to Security Lake should buffer records for 5 minutes. If a 5-minute buffer includes more data than can be queried efficiently from a single object, custom sources can write multiple objects within the window, as long as the average size of those objects remains under 256 MB. Custom sources with low throughput can write smaller objects every 5 minutes to maintain a 5-minute ingest latency, or can instead buffer records for longer periods.
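
As a sketch of this buffering guidance, the following hypothetical Python class flushes on a 5-minute timer, or early when the pending data approaches the 256 MB average object size. The class and thresholds are illustrative assumptions, not part of Security Lake:

import time

FLUSH_INTERVAL_SECONDS = 5 * 60       # buffer records for 5 minutes
MAX_OBJECT_BYTES = 256 * 1024 * 1024  # keep average object size under 256 MB

class RecordBuffer:
    """Accumulates records and signals when a Parquet object should be written."""

    def __init__(self):
        self.records = []
        self.byte_count = 0
        self.last_flush = time.monotonic()

    def add(self, record: bytes) -> None:
        self.records.append(record)
        self.byte_count += len(record)

    def should_flush(self) -> bool:
        # Flush on the 5-minute timer, or early if this buffer alone
        # would push the average object size past 256 MB.
        elapsed = time.monotonic() - self.last_flush
        return elapsed >= FLUSH_INTERVAL_SECONDS or self.byte_count >= MAX_OBJECT_BYTES

    def drain(self) -> list:
        records, self.records = self.records, []
        self.byte_count = 0
        self.last_flush = time.monotonic()
        return records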

Parquet settings

Security Lake supports versions 1.x and 2.x of Parquet. Data page size should be limited to 1 MB (uncompressed). Row group size should be no larger than 256 MB (compressed). For compression within the Parquet object, zstandard is preferred.

Sorting

Within each Parquet-formatted object, records should be ordered by time to reduce the cost of querying data.
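
The following Python sketch, using the pyarrow library, shows one way to apply the Parquet and sorting recommendations above. The field names and sizing values are illustrative assumptions; tune the row group bound to your record size:

import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical records that all share a single OCSF event class.
table = pa.table({
    "time": [1682640001000, 1682640000000, 1682640002000],  # epoch milliseconds
    "severity_id": [1, 3, 1],
})

# Order records by time before writing (see Sorting).
table = table.sort_by("time")

pq.write_table(
    table,
    "events.parquet",
    compression="zstd",          # preferred compression
    data_page_size=1024 * 1024,  # limit data pages to 1 MB (uncompressed)
    row_group_size=1_000_000,    # rows per group; tune so compressed groups stay under 256 MB
)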

Prerequisites to adding a custom source

When adding a custom source, Security Lake creates an IAM role that permits the source to write data to the correct location in the data lake. The name of the role follows the format AmazonSecurityLake-Provider-{name of the custom source}-{region}, where region is the AWS Region in which you're adding the custom source. Security Lake attaches a policy to the role that permits access to the data lake. If you've encrypted the data lake with a customer managed AWS KMS key, Security Lake also attaches a policy with kms:Decrypt and kms:GenerateDataKey permissions to the role. The permissions boundary for this role is set by an AWS managed policy called AmazonSecurityLakePermissionsBoundary.

Verify permissions

Before adding a custom source, verify that you have the permissions to perform the following actions.

To verify your permissions, use IAM to review the IAM policies that are attached to your IAM identity. Then, compare the information in those policies to the following list of actions that you must be allowed to perform to add a custom source.

  • glue:CreateCrawler

  • glue:CreateDatabase

  • glue:CreateTable

  • glue:StopCrawlerSchedule

  • iam:GetRole

  • iam:PutRolePolicy

  • iam:DeleteRolePolicy

  • iam:PassRole

  • lakeformation:RegisterResource

  • lakeformation:GrantPermissions

  • s3:ListBucket

  • s3:PutObject

These actions allow you to collect logs and events from a custom source, send them to the correct AWS Glue database and table, and store them in Amazon S3.

If you use an AWS KMS key for server-side encryption of your data lake, you also need permission for kms:CreateGrant, kms:DescribeKey, and kms:GenerateDataKey.
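
If you script your setup, one way to check these permissions is the IAM policy simulator. The following boto3 sketch is illustrative; the principal ARN is a placeholder that you'd replace with your own identity:

import boto3

iam = boto3.client("iam")

# The actions listed above; include the kms:* actions only if your
# data lake uses a customer managed KMS key.
actions = [
    "glue:CreateCrawler", "glue:CreateDatabase", "glue:CreateTable",
    "glue:StopCrawlerSchedule", "iam:GetRole", "iam:PutRolePolicy",
    "iam:DeleteRolePolicy", "iam:PassRole",
    "lakeformation:RegisterResource", "lakeformation:GrantPermissions",
    "s3:ListBucket", "s3:PutObject",
]

# Placeholder ARN; replace with the IAM identity you plan to use.
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:user/security-lake-admin",
    ActionNames=actions,
)

for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])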

Important

If you plan to use the Security Lake console to add a custom source, you can skip the next step and proceed to Adding a custom source. The Security Lake console offers a streamlined process for getting started, and creates all necessary IAM roles or uses existing roles on your behalf.

If you plan to use the Security Lake API or the AWS CLI to add a custom source, continue with the next step to create an IAM role that permits write access to the Security Lake bucket location.

Create IAM role to permit write access to Security Lake bucket location (API and AWS CLI-only step)

If you're using the Security Lake API or the AWS CLI to add a custom source, create this IAM role to grant AWS Glue permission to crawl your custom source data and identify partitions in it. These partitions are necessary to organize your data and to create and update tables in the Data Catalog.

After creating this IAM role, you will need the Amazon Resource Name (ARN) of the role in order to add a custom source.

You must attach the AWS managed policy arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole to the role.

To grant the necessary permissions, you must also create and embed the following inline policy in your role. It permits the AWS Glue crawler to read data files from the custom source and to create and update tables in the AWS Glue Data Catalog.

{ "Version": "2012-10-17", "Statement": [ { "Sid": "S3WriteRead", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::{{bucketName}}/*" ] } ] }

Attach the following trust policy to the role so that AWS Glue can assume it:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "glue.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }

If the S3 bucket in the Region where you're adding the custom source is encrypted with a customer-managed AWS KMS key, you must also attach the following policy to the role and to your KMS key policy:

{ "Effect": "Allow", "Action": [ "kms:GenerateDataKey" "kms:Decrypt" ], "Condition": { "StringLike": { "kms:EncryptionContext:aws:s3:arn": [ "arn:aws:s3:::{{name of S3 bucket created by Security Lake}" ] } }, "Resource": [ "{{ARN of customer managed key}}" ] }

Adding a custom source

After creating the IAM role to invoke the AWS Glue crawler, follow these steps to add a custom source in Security Lake.

Console
  1. Open the Security Lake console at https://console.aws.amazon.com/securitylake/.

  2. By using the AWS Region selector in the upper-right corner of the page, select the Region where you want to create the custom source.

  3. Choose Custom sources in the navigation pane, and then choose Create custom source.

  4. In the Custom source details section, enter a globally unique name for your custom source. Then, select an OCSF event class that describes the type of data that the custom source will send to Security Lake.

  5. For AWS account with permission to write data, enter the AWS account ID and External ID of the custom source that will write logs and events to the data lake.

  6. For Service Access, create and use a new service role or use an existing service role that gives Security Lake permission to invoke AWS Glue.

  7. Choose Create.

API

To add a custom source programmatically, use the CreateCustomLogSource operation of the Security Lake API. Use the operation in the AWS Region where you want to create the custom source. If you're using the AWS Command Line Interface (AWS CLI), run the create-custom-log-source command.

In your request, use the supported parameters to specify configuration settings for the custom source:

  • sourceName – Specify a name for the source. The name must be a Regionally unique value.

  • eventClasses – Specify one or more OCSF event classes to describe the type of data that the source will send to Security Lake. For a list of the OCSF event classes that Security Lake supports as sources, see Open Cybersecurity Schema Framework (OCSF).

  • sourceVersion – Optionally, specify a value to limit log collection to a specific version of custom source data.

  • crawlerConfiguration – Specify the Amazon Resource Name (ARN) of the IAM role that you created to invoke the AWS Glue crawler. For the detailed steps to create an IAM role, see Prerequisites to adding a custom source.

  • providerIdentity – Specify the AWS identity and external ID that the source will use to write logs and events to the data lake.

The following example adds a custom source as a log source in the designated log provider account in the designated Region. This example is formatted for Linux, macOS, or Unix, and it uses the backslash (\) line-continuation character to improve readability.

$ aws securitylake create-custom-log-source \
--source-name EXAMPLE_CUSTOM_SOURCE \
--event-classes '["DNS_ACTIVITY", "NETWORK_ACTIVITY"]' \
--configuration crawlerConfiguration={"roleArn=arn:aws:iam::XXX:role/service-role/RoleName"},providerIdentity={"externalId=ExternalId,principal=principal"} \
--region ap-southeast-2
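
If you're working in Python instead of the AWS CLI, the following boto3 sketch calls the same operation. The ARNs, external ID, and Region are placeholders:

import boto3

# Region is an illustrative assumption.
client = boto3.client("securitylake", region_name="ap-southeast-2")

response = client.create_custom_log_source(
    sourceName="EXAMPLE_CUSTOM_SOURCE",
    eventClasses=["DNS_ACTIVITY", "NETWORK_ACTIVITY"],
    configuration={
        "crawlerConfiguration": {
            # Placeholder ARN of the role created in the prerequisites.
            "roleArn": "arn:aws:iam::123456789012:role/service-role/RoleName",
        },
        "providerIdentity": {
            "externalId": "ExternalId",
            "principal": "123456789012",  # account that writes the data
        },
    },
)
print(response["source"]["sourceName"])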

Keeping custom source data updated in AWS Glue

After you add a custom source in Security Lake, Security Lake creates an AWS Glue crawler. The crawler connects to your custom source, determines the data structures, and populates the AWS Glue Data Catalog with tables.

We recommend manually running the crawler to keep your custom source schema up to date and maintain query functionality in Athena and other querying services. Specifically, you should run the crawler if either of the following changes occurs in your input data set for a custom source:

  • The data set has one or more new top-level columns.

  • The data set has one or more new fields in a column with a struct datatype.

For instructions on running a crawler, see Scheduling an AWS Glue crawler in the AWS Glue Developer Guide.

Security Lake can't delete or update existing crawlers in your account. If you delete a custom source, we recommend deleting the associated crawler if you plan to create a custom source with the same name in the future.
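
To run the crawler programmatically instead of from the AWS Glue console, you can call the StartCrawler operation. In the following boto3 sketch, the crawler name is a placeholder; substitute the name of the crawler that Security Lake created for your custom source:

import boto3

glue = boto3.client("glue")

# Placeholder name; substitute the crawler that Security Lake created
# for your custom source (list_crawlers() can help you find it).
glue.start_crawler(Name="EXAMPLE_CUSTOM_SOURCE")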

Deleting a custom source

Delete a custom source to stop sending data from the source to Security Lake.

Console
  1. Open the Security Lake console at https://console.aws.amazon.com/securitylake/.

  2. By using the AWS Region selector in the upper-right corner of the page, select the Region that you want to remove the custom source from.

  3. In the navigation pane, choose Custom sources.

  4. Select the custom source that you want to remove.

  5. Choose Deregister custom source and then choose Delete to confirm the action.

API

To delete a custom source programmatically, use the DeleteCustomLogSource operation of the Security Lake API. If you're using the AWS Command Line Interface (AWS CLI), run the delete-custom-log-source command. Use the operation in the AWS Region where you want to delete the custom source.

In your request, use the sourceName parameter to specify the name of the custom source to delete. Or specify the name of the custom source and use the sourceVersion parameter to limit the scope of the deletion to only a specific version of data from the custom source.

The following example deletes a custom log source from Security Lake.

This example is formatted for Linux, macOS, or Unix, and it uses the backslash (\) line-continuation character to improve readability.

$ aws securitylake delete-custom-log-source \
--source-name EXAMPLE_CUSTOM_SOURCE
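
The equivalent boto3 call, with the Region as an illustrative assumption:

import boto3

client = boto3.client("securitylake", region_name="ap-southeast-2")

# Pass sourceVersion as well to limit the deletion to a specific
# version of data from the custom source.
client.delete_custom_log_source(sourceName="EXAMPLE_CUSTOM_SOURCE")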