
Syslog

Centralized Logging with OpenSearch collects syslog logs over the UDP or TCP protocol.

This section guides you through creating a log pipeline that ingests logs from a syslog endpoint.

Create a log analytics pipeline (OpenSearch Engine)

Prerequisites

Make sure you have imported an Amazon OpenSearch Service domain. For more information, see Domain operations.

Follow these steps:

  1. Sign in to the Centralized Logging with OpenSearch Console.

  2. In the left sidebar, under Log Analytics Pipelines, choose Application Log.

  3. Choose Create a pipeline.

  4. Choose Syslog Endpoint as Log Source, choose Amazon OpenSearch Service, and choose Next.

  5. Select UDP or TCP with custom port number. Choose Next.

You have created a log source for the log analytics pipeline. Next, complete the remaining configuration for the log analytics pipeline with syslog as the log source.
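Once the endpoint is active, you can verify connectivity by sending a test message to it. The sketch below is illustrative, not part of the solution: the endpoint address and port are placeholders, so substitute the NLB DNS name and port that the solution provisions for your pipeline.

```python
import socket

def build_syslog_message(facility: int, severity: int,
                         hostname: str, app: str, text: str) -> str:
    """Build a minimal RFC 3164-style syslog line; PRI = facility * 8 + severity."""
    pri = facility * 8 + severity
    return f"<{pri}>{hostname} {app}: {text}"

def send_udp(endpoint: str, port: int, message: str) -> None:
    """Fire a single UDP datagram at the syslog endpoint."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("utf-8"), (endpoint, port))

def send_tcp(endpoint: str, port: int, message: str) -> None:
    """Open a TCP connection and send one newline-terminated syslog line."""
    with socket.create_connection((endpoint, port), timeout=5) as sock:
        sock.sendall((message + "\n").encode("utf-8"))

# Example (hypothetical endpoint; replace with the one created for your pipeline):
# msg = build_syslog_message(1, 6, "web-01", "myapp", "hello from syslog")
# send_udp("logging-syslog-nlb.example.com", 514, msg)
```

After sending a few messages, you should see documents arrive in the target index once the pipeline is active.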

  1. Select a log config. If you do not find the desired log config from the dropdown list, choose Create New and follow instructions in Log Config.

  2. Enter a Log Path to specify the location of logs you want to collect.

  3. Specify Index name in lowercase.

  4. In the Buffer section, choose S3 or Kinesis Data Streams. If you don't want a buffer layer, choose None. See Log Buffer for more information about choosing the appropriate buffer layer.

    • Amazon S3 buffer parameters

      | Parameter | Default | Description |
      | --- | --- | --- |
      | S3 Bucket | A log bucket will be created by the solution. | You can also select a bucket to store the log data. |
      | S3 Bucket Prefix | AppLogs/&lt;index-prefix&gt;/year=%Y/month=%m/day=%d | The log agent appends the prefix when delivering the log files to the S3 bucket. |
      | Buffer size | 50 MiB | The maximum size of log data cached on the log agent side before delivery to Amazon S3. For more information, see Data Delivery Frequency. |
      | Buffer interval | 60 seconds | The maximum interval at which the log agent delivers logs to Amazon S3. For more information, see Data Delivery Frequency. |
      | Compression for data records | Gzip | The log agent compresses records before delivering them to the S3 bucket. |
    • Kinesis Data Streams buffer parameters

      | Parameter | Default | Description |
      | --- | --- | --- |
      | Shard number | &lt;Requires input&gt; | The number of shards in the Kinesis data stream. Each shard can ingest up to 1,000 records per second, with a total data write rate of up to 1 MB per second. |
      | Enable auto scaling | No | This solution monitors the utilization of the Kinesis data stream every 5 minutes and scales the number of shards in or out automatically. The solution scales in or out at most 8 times within 24 hours. |
      | Maximum Shard number | &lt;Requires input&gt; | Required if auto scaling is enabled. The maximum number of shards. |
      Important

      You may observe duplicate logs in OpenSearch if a threshold error occurs in Kinesis Data Streams (KDS). This is because the Fluent Bit log agent uploads logs in chunks (each containing multiple records) and retries the whole chunk if an upload fails. Each KDS shard can support up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second. Estimate your log volume and choose an appropriate shard number.

  5. Choose Next.

  6. In the Specify OpenSearch domain section, select an imported domain for Amazon OpenSearch Service domain.

  7. In the Log Lifecycle section, enter the number of days to manage the Amazon OpenSearch Service index lifecycle. Centralized Logging with OpenSearch creates the associated Index State Management (ISM) policy automatically for this pipeline.

  8. In the Select log processor section, choose the log processor.

    • When selecting Lambda as a log processor, you can configure the Lambda concurrency if needed.

    • (Optional) OSI as log processor is supported in certain AWS Regions. When OSI is selected, enter the minimum and maximum number of OCUs. For more information, see Scaling pipelines.

  9. Choose Next.

  10. Enable Alarms if needed and select an existing SNS topic. If you choose Create a new SNS topic, provide a name and an email address for the new SNS topic.

  11. Add tags if needed.

  12. Choose Create.

  13. Wait for the application pipeline to reach the "Active" state.
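The per-shard limits quoted in the KDS note above translate into a simple sizing calculation: take the shard count demanded by the record rate and the one demanded by the byte rate, and use whichever is larger. The helper below is a sketch with hypothetical names, not part of the solution:

```python
import math

# Per-shard write limits quoted in the KDS buffer note above.
MAX_RECORDS_PER_SHARD = 1_000      # records per second
MAX_BYTES_PER_SHARD = 1_000_000    # ~1 MB per second

def estimate_shards(records_per_second: float, bytes_per_second: float) -> int:
    """Return the minimum shard count that satisfies both write limits."""
    by_records = math.ceil(records_per_second / MAX_RECORDS_PER_SHARD)
    by_bytes = math.ceil(bytes_per_second / MAX_BYTES_PER_SHARD)
    return max(by_records, by_bytes, 1)

# Example: 2,500 records/s of ~400-byte log lines (1,000,000 bytes/s)
# is bound by the record limit, so 3 shards are needed.
print(estimate_shards(2_500, 1_000_000))
```

Leaving headroom above this estimate (or enabling auto scaling) reduces the chance of the throttling-induced duplicates described in the note.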

Create a log analytics pipeline (Light Engine)

Follow these steps:

  1. Sign in to the Centralized Logging with OpenSearch Console.

  2. In the left sidebar, under Log Analytics Pipelines, choose Application Log.

  3. Choose Create a pipeline.

  4. Choose Syslog Endpoint as Log Source, choose Light Engine, and choose Next.

  5. Select UDP or TCP with custom port number. Choose Next.

You have created a log source for the log analytics pipeline. Next, complete the remaining configuration for the log analytics pipeline with syslog as the log source.

  1. Select a log config. If you do not find the desired log config from the dropdown list, choose Create New and follow instructions in Log Config.

  2. Enter a Log Path to specify the location of logs you want to collect.

  3. In the Buffer section, configure Amazon S3 buffer parameters.

    • Amazon S3 buffer parameters

      | Parameter | Default | Description |
      | --- | --- | --- |
      | S3 Bucket | A log bucket will be created by the solution. | You can also select a bucket to store the log data. |
      | Buffer size | 50 MiB | The maximum size of log data cached on the log agent side before delivery to Amazon S3. For more information, see Data Delivery Frequency. |
      | Buffer interval | 60 seconds | The maximum interval at which the log agent delivers logs to Amazon S3. For more information, see Data Delivery Frequency. |
      | Compression for data records | Gzip | The log agent compresses records before delivering them to the S3 bucket. |
  4. Choose Next.

  5. In the Specify Light Engine Configuration section, if you want to ingest an associated templated Grafana dashboard, select Yes for the sample dashboard.

  6. Choose an existing Grafana, or import a new one by completing the configurations in Grafana.

  7. Select an Amazon S3 bucket to store partitioned logs and give a name to the log table. The solution provides a predefined table name, but you can modify it according to your needs.

  8. Modify the log processing frequency if needed. It is set to 5 minutes by default, with a minimum processing frequency of 1 minute.

  9. In the Log Lifecycle section, enter the log merger time and log archive time. The solution provides default values, which you can modify according to your needs.

  10. Choose Next.

  11. Enable Alarms if needed and select an existing SNS topic. If you choose Create a new SNS topic, provide a name and an email address for the new SNS topic.

  12. Add tags if needed.

  13. Choose Create.

  14. Wait for the application pipeline to reach the "Active" state.
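The S3 Bucket Prefix shown in the Amazon S3 buffer parameters uses strftime date tokens, so delivered objects are partitioned by year, month, and day. The sketch below shows how the default template expands; the index prefix value and helper name are hypothetical:

```python
from datetime import datetime, timezone

def render_prefix(template: str, index_prefix: str, when: datetime) -> str:
    """Substitute the index prefix, then expand the strftime date tokens."""
    return when.strftime(template.replace("<index-prefix>", index_prefix))

# Default prefix from the Amazon S3 buffer parameters:
template = "AppLogs/<index-prefix>/year=%Y/month=%m/day=%d"
print(render_prefix(template, "syslog-app", datetime(2024, 3, 9, tzinfo=timezone.utc)))
# AppLogs/syslog-app/year=2024/month=03/day=09
```

This date-partitioned layout lets downstream queries and lifecycle rules target a single day's objects by prefix.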