Amazon EKS cluster
For Amazon Elastic Kubernetes Service (Amazon EKS) clusters, Centralized Logging with OpenSearch generates an all-in-one configuration file for you to deploy the log agent (Fluent Bit 1.9) as a DaemonSet or Sidecar. After the log agent is deployed, the solution starts collecting pod logs.
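For example, once you have applied the generated configuration to the cluster, you can check that the agent DaemonSet is up before expecting logs to flow. The following is a minimal sketch using the official Kubernetes Python client; it assumes a kubeconfig that can reach the cluster, and the `fluent-bit` name match is an assumption about how the DaemonSet is named.

```python
# Minimal verification sketch: list DaemonSets and report the readiness of
# any whose name contains "fluent-bit". Assumes `pip install kubernetes`
# and a kubeconfig with access to the EKS cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

for ds in apps.list_daemon_set_for_all_namespaces().items:
    if "fluent-bit" in ds.metadata.name:  # assumed naming; adjust as needed
        ready = ds.status.number_ready or 0
        desired = ds.status.desired_number_scheduled or 0
        print(f"{ds.metadata.namespace}/{ds.metadata.name}: "
              f"{ready}/{desired} pods ready")
```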
The following steps guide you through creating a log pipeline that ingests logs from an Amazon EKS cluster.
Create a log analytics pipeline (OpenSearch Engine)
Prerequisites
Make sure you have imported an Amazon OpenSearch Service domain. For more information, see Domain operations.
Follow these steps:
- Sign in to the Centralized Logging with OpenSearch Console.
- In the left sidebar, under Log Analytics Pipelines, choose Application Log.
- Choose Create a pipeline.
- Choose Amazon EKS as Log Source, choose Amazon OpenSearch Service, and choose Next.
- Choose the AWS account in which the logs are stored.
- Choose an EKS Cluster. If no clusters are imported yet, choose Import an EKS Cluster and follow the instructions to import an EKS cluster. After that, select the newly imported EKS cluster from the dropdown list.
- Choose Next.
You have created a log source for the log analytics pipeline. Now you are ready to configure the rest of the log analytics pipeline, with the Amazon EKS cluster as the log source.
- Select a log config. If you do not find the desired log config in the dropdown list, choose Create New and follow the instructions in Log Config.
- Enter a Log Path to specify the location of the logs you want to collect. You can use a comma (,) to separate multiple paths. Choose Next.
- Specify Index name in lowercase.
- In the Buffer section, choose S3 or Kinesis Data Streams. If you don't want a buffer layer, choose None. Refer to Log Buffer for more information about choosing the appropriate buffer layer.
  - Amazon S3 buffer parameters

    | Parameter | Default | Description |
    | --- | --- | --- |
    | S3 Bucket | A log bucket will be created by the solution. | You can also select a bucket to store the log data. |
    | S3 Bucket Prefix | AppLogs/<index-prefix>/year=%Y/month=%m/day=%d | The log agent appends the prefix when delivering the log files to the S3 bucket. |
    | Buffer size | 50 MiB | The maximum size of log data cached at the log agent side before delivering to Amazon S3. For more information, see Data Delivery Frequency. |
    | Buffer interval | 60 seconds | The maximum interval for the log agent to deliver logs to Amazon S3. For more information, see Data Delivery Frequency. |
    | Compression for data records | Gzip | The log agent compresses records before delivering them to the S3 bucket. |
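The date tokens in the default S3 Bucket Prefix are standard strftime tokens, so you can preview how delivered objects will be partitioned by date. A small illustration in Python; the index prefix `my-app` is a placeholder for your own `<index-prefix>`:

```python
# Illustrates how the %Y/%m/%d tokens in the default S3 Bucket Prefix
# expand; "my-app" stands in for your <index-prefix>.
from datetime import datetime, timezone

prefix = "AppLogs/my-app/year=%Y/month=%m/day=%d"
print(datetime.now(timezone.utc).strftime(prefix))
# e.g. AppLogs/my-app/year=2025/month=01/day=15
```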
  - Kinesis Data Streams buffer parameters

    | Parameter | Default | Description |
    | --- | --- | --- |
    | Shard number | <Requires input> | The number of shards of the Kinesis data stream. Each shard can have up to 1,000 records per second and a total data write rate of 1 MB per second. |
    | Enable auto scaling | No | This solution monitors the utilization of Kinesis Data Streams every 5 minutes, and scales in/out the number of shards automatically. The solution will scale in/out a maximum of 8 times within 24 hours. |
    | Maximum shard number | <Requires input> | Required if auto scaling is enabled. The maximum number of shards. |

    Important: You may observe duplicate logs in OpenSearch if a threshold error occurs in Kinesis Data Streams (KDS). This is because the Fluent Bit log agent uploads logs in chunks (each chunk contains multiple records) and retries the whole chunk if an upload fails. Each KDS shard can support up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second. Estimate your log volume and choose an appropriate shard number; the sizing sketch below shows one way to estimate it.
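One way to turn those per-shard limits into a shard count is to take the stricter of the record-rate and byte-rate limits and round up. The following is a minimal sizing sketch; the function name and the example traffic figures are illustrative, not part of the solution.

```python
# Sizing sketch: derive a KDS shard count from the two per-shard write
# limits (1,000 records/second and 1 MB/second). Inputs are your own
# estimates of peak log traffic.
import math

def estimate_shards(records_per_second: float, avg_record_bytes: float) -> int:
    by_records = records_per_second / 1_000
    by_bytes = (records_per_second * avg_record_bytes) / (1024 * 1024)
    # Take the stricter of the two limits and round up to a whole shard.
    return max(1, math.ceil(max(by_records, by_bytes)))

# Example: 5,000 records/second at ~800 bytes per record.
# Record limit -> 5 shards; byte limit -> ceil(3.8) = 4 shards; choose 5.
print(estimate_shards(5_000, 800))  # 5
```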
- Choose Next.
- In the Specify OpenSearch domain section, select an imported domain for the Amazon OpenSearch Service domain.
- In the Log Lifecycle section, enter the number of days to manage the Amazon OpenSearch Service index lifecycle. Centralized Logging with OpenSearch will create the associated Index State Management (ISM) policy automatically for this pipeline; a sketch of such a policy follows these steps.
- In the Select log processor section, choose the log processor.
  - When selecting Lambda as a log processor, you can configure the Lambda concurrency if needed.
  - (Optional) OSI as a log processor is now supported in these Regions. When OSI is selected, enter the minimum and maximum number of OCUs. For more information, see Scaling pipelines.
- Choose Next.
- Enable Alarms if needed and select an existing SNS topic. If you choose Create a new SNS topic, provide a name and an email address for the new SNS topic.
- Add tags if needed.
- Choose Create.
- Wait for the application pipeline to turn to the "Active" state.
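As mentioned in the Log Lifecycle step, the solution creates an ISM policy for the pipeline's index. The following is a hedged sketch of what such a policy can look like in the OpenSearch ISM format, expressed as a Python dict; the state names and the 180-day retention are illustrative assumptions, and the policy the solution actually generates may differ.

```python
# Hedged sketch of an Index State Management (ISM) policy that deletes an
# index after a retention period. The 180-day age and state names are
# illustrative; the policy generated by the solution may differ.
import json

ism_policy = {
    "policy": {
        "description": "Delete application log indexes after 180 days",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    # Move to the delete state once the index is old enough.
                    {"state_name": "delete",
                     "conditions": {"min_index_age": "180d"}}
                ],
            },
            {
                "name": "delete",
                "actions": [{"delete": {}}],  # remove the index entirely
                "transitions": [],
            },
        ],
    }
}

print(json.dumps(ism_policy, indent=2))
```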
Create a log analytics pipeline (Light Engine)
Follow these steps:
- Sign in to the Centralized Logging with OpenSearch Console.
- In the left sidebar, under Log Analytics Pipelines, choose Application Log.
- Choose Create a pipeline.
- Choose Amazon EKS as Log Source, choose Light Engine, and choose Next.
- Choose the AWS account in which the logs are stored.
- Choose an EKS Cluster. If no clusters are imported yet, choose Import an EKS Cluster and follow the instructions to import an EKS cluster. After that, select the newly imported EKS cluster from the dropdown list.
- Choose Next.
You have created a log source for the log analytics pipeline. Now you are ready to configure the rest of the log analytics pipeline, with the Amazon EKS cluster as the log source.
- Select a log config. If you do not find the desired log config in the dropdown list, choose Create New and follow the instructions in Log Config.
- Enter a Log Path to specify the location of the logs you want to collect. You can use a comma (,) to separate multiple paths. Choose Next.
- In the Buffer section, configure the Amazon S3 buffer parameters.

  | Parameter | Default | Description |
  | --- | --- | --- |
  | S3 Bucket | A log bucket will be created by the solution. | You can also select a bucket to store the log data. |
  | Buffer size | 50 MiB | The maximum size of log data cached at the log agent side before delivering to Amazon S3. For more information, see Data Delivery Frequency. |
  | Buffer interval | 60 seconds | The maximum interval for the log agent to deliver logs to Amazon S3. For more information, see Data Delivery Frequency. |
  | Compression for data records | Gzip | The log agent compresses records before delivering them to the S3 bucket. |
- Choose Next.
- In the Specify Light Engine Configuration section, if you want to ingest an associated templated Grafana dashboard, select Yes for the sample dashboard.
- Choose an existing Grafana, or import a new one by making configurations in Grafana.
- Select an Amazon S3 bucket to store the partitioned logs and give a name to the log table. The solution provides a predefined table name, but you can modify it according to your needs.
- Modify the log processing frequency if needed. It is set to 5 minutes by default, with a minimum processing frequency of 1 minute.
- In the Log Lifecycle section, enter the log merger time and log archive time. The solution provides default values, which you can modify according to your needs.
- Choose Next.
- Enable Alarms if needed and select an existing SNS topic. If you choose Create a new SNS topic, provide a name and an email address for the new SNS topic; the sketch after these steps shows the equivalent SNS setup.
- Add tags if needed.
- Choose Create.
- Wait for the application pipeline to turn to the "Active" state.
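For reference, the Create a new SNS topic option in the Alarms step provisions a topic and an email subscription on your behalf. The following is a minimal boto3 sketch of the equivalent setup; the topic name and email address are placeholders, not values fixed by the solution.

```python
# Minimal boto3 sketch of what "Create a new SNS topic" amounts to:
# create the topic, then subscribe an email endpoint (the recipient must
# confirm the subscription email). Names below are placeholders.
import boto3

sns = boto3.client("sns")
topic = sns.create_topic(Name="clo-pipeline-alarms")
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="ops-team@example.com",
)
```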