You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.

Class: Aws::CloudWatchLogs::Client

Inherits:
Seahorse::Client::Base
Defined in:
(unknown)

Overview

An API client for Amazon CloudWatch Logs. To construct a client, you need to configure a :region and :credentials.

cloudwatchlogs = Aws::CloudWatchLogs::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

See #initialize for a full list of supported configuration options.

Region

You can configure a default region in the following locations:

  • ENV['AWS_REGION']
  • Aws.config[:region]

Go here for a list of supported regions.

Credentials

Default credentials are loaded automatically from the following locations:

  • ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY']
  • Aws.config[:credentials]
  • The shared credentials ini file at ~/.aws/credentials (more information)
  • From an instance profile when running on EC2

You can also construct a credentials object from one of the following classes:

  • Aws::Credentials
  • Aws::SharedCredentials
  • Aws::InstanceProfileCredentials
  • Aws::AssumeRoleCredentials

Alternatively, you can configure credentials with :access_key_id and :secret_access_key:

# load credentials from disk
creds = YAML.load(File.read('/path/to/secrets'))

Aws::CloudWatchLogs::Client.new(
  access_key_id: creds['access_key_id'],
  secret_access_key: creds['secret_access_key']
)

Always load your credentials from outside your application. Avoid configuring credentials statically and never commit them to source control.

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

Constructor

API Operations

Instance Method Summary

Methods inherited from Seahorse::Client::Base

add_plugin, api, #build_request, clear_plugins, define, new, #operation, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options = {}) ⇒ Aws::CloudWatchLogs::Client

Constructs an API client.

Options Hash (options):

  • :access_key_id (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :active_endpoint_cache (Boolean)

    When set to true, an endpoint-polling thread runs in the background, refreshing the cache every 60 seconds by default. Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types. See Plugins::ParamConverter for more details.

  • :credentials (required, Credentials)

    Your AWS credentials. The following locations will be searched in order for credentials:

    • :access_key_id, :secret_access_key, and :session_token options
    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']
    • HOME/.aws/credentials shared credentials file
    • EC2 instance profile credentials

    See Plugins::RequestSigner for more details.
  • :disable_host_prefix_injection (Boolean)

    Set to true to prevent the SDK from automatically adding a host prefix to the default service endpoint when one is available. See Plugins::EndpointPattern for more details.

  • :endpoint (String)

    A default endpoint is constructed from the :region. See Plugins::RegionalEndpoint for more details.

  • :endpoint_cache_max_entries (Integer)

    The maximum number of entries in the LRU cache that stores endpoint data for endpoint-discovery-enabled operations. Defaults to 1000. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_max_threads (Integer)

    The maximum number of threads used to poll for endpoints to cache. Defaults to 10. See Plugins::EndpointDiscovery for more details.

  • :endpoint_cache_poll_interval (Integer)

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the interval, in seconds, between requests that fetch endpoint information. Defaults to 60 seconds. See Plugins::EndpointDiscovery for more details.

  • :endpoint_discovery (Boolean)

    When set to true, endpoint discovery will be enabled for operations when available. Defaults to false. See Plugins::EndpointDiscovery for more details.

  • :http_continue_timeout (Float) — default: 1

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_idle_timeout (Integer) — default: 5

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_open_timeout (Integer) — default: 15

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_proxy (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_read_timeout (Integer) — default: 60

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :http_wire_trace (Boolean) — default: false

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the logger at. See Plugins::Logging for more details.

  • :log_formatter (Logging::LogFormatter)

    The log formatter. Defaults to Seahorse::Client::Logging::Formatter.default. See Plugins::Logging for more details.

  • :logger (Logger) — default: nil

    The Logger instance to send log messages to. If this option is not set, logging will be disabled. See Plugins::Logging for more details.

  • :profile (String)

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used. See Plugins::RequestSigner for more details.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised. See Seahorse::Client::Plugins::RaiseResponseErrors for more details.

  • :region (required, String)

    The AWS region to connect to. The region is used to construct the client endpoint. Defaults to ENV['AWS_REGION']. Also checks AMAZON_REGION and AWS_DEFAULT_REGION. See Plugins::RegionalEndpoint for more details.

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~500 level server errors and certain ~400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, and auth errors from expired credentials. See Plugins::RetryErrors for more details.

  • :secret_access_key (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :session_token (String)

    Used to set credentials statically. See Plugins::RequestSigner for more details.

  • :simple_json (Boolean) — default: false

    Disables request parameter conversion, validation, and formatting. Also disables response data type conversions. This option is useful when you want the highest level of performance by avoiding the overhead of walking request parameters and response data structures.

    When :simple_json is enabled, the request parameters hash must be formatted exactly as the service API expects. See Plugins::Protocols::JsonRpc for more details.

  • :ssl_ca_bundle (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_directory (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_ca_store (String)

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :ssl_verify_peer (Boolean) — default: true

    See Seahorse::Client::Plugins::NetHttp for more details.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default, fake responses are generated and returned. You can specify the response data to return or errors to raise by calling Aws::ClientStubs#stub_responses. See Aws::ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made and retries are disabled. See Plugins::StubResponses for more details.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request. See Plugins::ParamValidator for more details.

Instance Method Details

#associate_kms_key(options = {}) ⇒ Struct

Associates the specified AWS Key Management Service (AWS KMS) customer master key (CMK) with the specified log group.

Associating an AWS KMS CMK with a log group overrides any existing associations between the log group and a CMK. After a CMK is associated with a log group, all newly ingested data for the log group is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.

CloudWatch Logs supports only symmetric CMKs. Do not associate an asymmetric CMK with your log group. For more information, see Using Symmetric and Asymmetric Keys.

It can take up to 5 minutes for this operation to take effect.

If you attempt to associate a CMK with a log group but the CMK does not exist or the CMK is disabled, you receive an InvalidParameterException error.

Examples:

Request syntax with placeholder values


resp = client.associate_kms_key({
  log_group_name: "LogGroupName", # required
  kms_key_id: "KmsKeyId", # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :kms_key_id (required, String)

    The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. This must be a symmetric CMK.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#cancel_export_task(options = {}) ⇒ Struct

Cancels the specified export task.

The task must be in the PENDING or RUNNING state.

Examples:

Request syntax with placeholder values


resp = client.cancel_export_task({
  task_id: "ExportTaskId", # required
})

Options Hash (options):

  • :task_id (required, String)

    The ID of the export task.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#create_export_task(options = {}) ⇒ Types::CreateExportTaskResponse

Creates an export task, which allows you to efficiently export data from a log group to an Amazon S3 bucket. When you perform a CreateExportTask operation, you must use credentials that have permission to write to the S3 bucket that you specify as the destination.

This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active (RUNNING or PENDING) export task at a time. To cancel an export task, use CancelExportTask.

You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate out log data for each export task, you can specify a prefix to be used as the Amazon S3 key prefix for all exported objects.

Exporting to S3 buckets that are encrypted with AES-256 is supported. Exporting to S3 buckets encrypted with SSE-KMS is not supported.

Examples:

Request syntax with placeholder values


resp = client.create_export_task({
  task_name: "ExportTaskName",
  log_group_name: "LogGroupName", # required
  log_stream_name_prefix: "LogStreamName",
  from: 1, # required
  to: 1, # required
  destination: "ExportDestinationBucket", # required
  destination_prefix: "ExportDestinationPrefix",
})

Response structure


resp.task_id #=> String

Options Hash (options):

  • :task_name (String)

    The name of the export task.

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name_prefix (String)

    Export only log streams that match the provided prefix. If you don't specify a value, no prefix filter is applied.

  • :from (required, Integer)

    The start time of the range for the request, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp earlier than this time are not exported.

  • :to (required, Integer)

    The end time of the range for the request, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp later than this time are not exported.

  • :destination (required, String)

    The name of S3 bucket for the exported log data. The bucket must be in the same AWS region.

  • :destination_prefix (String)

    The prefix used as the start of the key for every object exported. If you don't specify a value, the default is exportedlogs.

Returns:

See Also:
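The :from and :to values above are epoch milliseconds. A minimal stdlib sketch for computing them (the helper name and dates are illustrative, not part of the SDK):

```ruby
# Convert a Ruby Time to the milliseconds-since-epoch Integer that
# CreateExportTask's :from and :to parameters expect.
def epoch_millis(time)
  (time.to_f * 1000).round
end

# Illustrative range: export events from Jan 1 to Jan 2, 2020 (UTC).
from_ms = epoch_millis(Time.utc(2020, 1, 1))
to_ms   = epoch_millis(Time.utc(2020, 1, 2))
```

Per the parameter descriptions above, events with timestamps earlier than from_ms or later than to_ms are excluded from the export.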

#create_log_group(options = {}) ⇒ Struct

Creates a log group with the specified name. You can create up to 20,000 log groups per account.

You must use the following guidelines when naming a log group:

  • Log group names must be unique within a region for an AWS account.

  • Log group names can be between 1 and 512 characters long.

  • Log group names consist of the following characters: a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), '/' (forward slash), '.' (period), and '#' (number sign)

When you create a log group, by default the log events in the log group never expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy.

If you associate an AWS Key Management Service (AWS KMS) customer master key (CMK) with the log group, ingested data is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.

If you attempt to associate a CMK with the log group but the CMK does not exist or the CMK is disabled, you receive an InvalidParameterException error.

CloudWatch Logs supports only symmetric CMKs. Do not associate an asymmetric CMK with your log group. For more information, see Using Symmetric and Asymmetric Keys.

Examples:

Request syntax with placeholder values


resp = client.create_log_group({
  log_group_name: "LogGroupName", # required
  kms_key_id: "KmsKeyId",
  tags: {
    "TagKey" => "TagValue",
  },
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :kms_key_id (String)

    The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. For more information, see Amazon Resource Names - AWS Key Management Service (AWS KMS).

  • :tags (Hash<String,String>)

    The key-value pairs to use for the tags.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#create_log_stream(options = {}) ⇒ Struct

Creates a log stream for the specified log group. A log stream is a sequence of log events that originate from a single source, such as an application instance or a resource that is being monitored.

There is no limit on the number of log streams that you can create for a log group. There is a limit of 50 TPS on CreateLogStream operations, after which transactions are throttled.

You must use the following guidelines when naming a log stream:

  • Log stream names must be unique within the log group.

  • Log stream names can be between 1 and 512 characters long.

  • The ':' (colon) and '*' (asterisk) characters are not allowed.

Examples:

Request syntax with placeholder values


resp = client.create_log_stream({
  log_group_name: "LogGroupName", # required
  log_stream_name: "LogStreamName", # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name (required, String)

    The name of the log stream.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_destination(options = {}) ⇒ Struct

Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination.

Examples:

Request syntax with placeholder values


resp = client.delete_destination({
  destination_name: "DestinationName", # required
})

Options Hash (options):

  • :destination_name (required, String)

    The name of the destination.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_log_group(options = {}) ⇒ Struct

Deletes the specified log group and permanently deletes all the archived log events associated with the log group.

Examples:

Request syntax with placeholder values


resp = client.delete_log_group({
  log_group_name: "LogGroupName", # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_log_stream(options = {}) ⇒ Struct

Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream.

Examples:

Request syntax with placeholder values


resp = client.delete_log_stream({
  log_group_name: "LogGroupName", # required
  log_stream_name: "LogStreamName", # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name (required, String)

    The name of the log stream.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_metric_filter(options = {}) ⇒ Struct

Deletes the specified metric filter.

Examples:

Request syntax with placeholder values


resp = client.delete_metric_filter({
  log_group_name: "LogGroupName", # required
  filter_name: "FilterName", # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name (required, String)

    The name of the metric filter.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_query_definition(options = {}) ⇒ Types::DeleteQueryDefinitionResponse

Deletes a saved CloudWatch Logs Insights query definition. A query definition contains details about a saved CloudWatch Logs Insights query.

Each DeleteQueryDefinition operation can delete one query definition.

You must have the logs:DeleteQueryDefinition permission to be able to perform this operation.

Examples:

Request syntax with placeholder values


resp = client.delete_query_definition({
  query_definition_id: "QueryId", # required
})

Response structure


resp.success #=> true/false

Options Hash (options):

  • :query_definition_id (required, String)

    The ID of the query definition that you want to delete. You can use DescribeQueryDefinitions to retrieve the IDs of your saved query definitions.

Returns:

See Also:

#delete_resource_policy(options = {}) ⇒ Struct

Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account.

Examples:

Request syntax with placeholder values


resp = client.delete_resource_policy({
  policy_name: "PolicyName",
})

Options Hash (options):

  • :policy_name (String)

    The name of the policy to be revoked. This parameter is required.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_retention_policy(options = {}) ⇒ Struct

Deletes the specified retention policy.

Log events do not expire if they belong to log groups without a retention policy.

Examples:

Request syntax with placeholder values


resp = client.delete_retention_policy({
  log_group_name: "LogGroupName", # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#delete_subscription_filter(options = {}) ⇒ Struct

Deletes the specified subscription filter.

Examples:

Request syntax with placeholder values


resp = client.delete_subscription_filter({
  log_group_name: "LogGroupName", # required
  filter_name: "FilterName", # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name (required, String)

    The name of the subscription filter.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#describe_destinations(options = {}) ⇒ Types::DescribeDestinationsResponse

Lists all your destinations. The results are ASCII-sorted by destination name.

Examples:

Request syntax with placeholder values


resp = client.describe_destinations({
  destination_name_prefix: "DestinationName",
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.destinations #=> Array
resp.destinations[0].destination_name #=> String
resp.destinations[0].target_arn #=> String
resp.destinations[0].role_arn #=> String
resp.destinations[0].access_policy #=> String
resp.destinations[0].arn #=> String
resp.destinations[0].creation_time #=> Integer
resp.next_token #=> String

Options Hash (options):

  • :destination_name_prefix (String)

    The prefix to match. If you don't specify a value, no prefix filter is applied.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

Returns:

See Also:

#describe_export_tasks(options = {}) ⇒ Types::DescribeExportTasksResponse

Lists the specified export tasks. You can list all your export tasks or filter the results based on task ID or task status.

Examples:

Request syntax with placeholder values


resp = client.describe_export_tasks({
  task_id: "ExportTaskId",
  status_code: "CANCELLED", # accepts CANCELLED, COMPLETED, FAILED, PENDING, PENDING_CANCEL, RUNNING
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.export_tasks #=> Array
resp.export_tasks[0].task_id #=> String
resp.export_tasks[0].task_name #=> String
resp.export_tasks[0].log_group_name #=> String
resp.export_tasks[0].from #=> Integer
resp.export_tasks[0].to #=> Integer
resp.export_tasks[0].destination #=> String
resp.export_tasks[0].destination_prefix #=> String
resp.export_tasks[0].status.code #=> String, one of "CANCELLED", "COMPLETED", "FAILED", "PENDING", "PENDING_CANCEL", "RUNNING"
resp.export_tasks[0].status.message #=> String
resp.export_tasks[0].execution_info.creation_time #=> Integer
resp.export_tasks[0].execution_info.completion_time #=> Integer
resp.next_token #=> String

Options Hash (options):

  • :task_id (String)

    The ID of the export task. Specifying a task ID filters the results to zero or one export tasks.

  • :status_code (String)

    The status code of the export task. Specifying a status code filters the results to zero or more export tasks.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

Returns:

See Also:
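Because CreateExportTask is asynchronous, callers typically poll DescribeExportTasks until the status code becomes terminal. A hypothetical sketch of that loop; FakeExportClient stands in for Aws::CloudWatchLogs::Client so the sketch runs offline, and the loop body is what you would write against a real client:

```ruby
require 'ostruct'

# Stand-in for Aws::CloudWatchLogs::Client: returns one canned status
# code per describe_export_tasks call.
class FakeExportClient
  def initialize(status_codes)
    @status_codes = status_codes.dup
  end

  def describe_export_tasks(params = {})
    code = @status_codes.shift
    OpenStruct.new(export_tasks: [OpenStruct.new(status: OpenStruct.new(code: code))])
  end
end

TERMINAL_STATES = %w[COMPLETED CANCELLED FAILED].freeze

# Poll until the task reaches a terminal state; returns the final code.
def wait_for_export_task(client, task_id, delay: 1)
  loop do
    resp = client.describe_export_tasks(task_id: task_id)
    code = resp.export_tasks.first.status.code
    return code if TERMINAL_STATES.include?(code)
    sleep delay
  end
end
```

Against a real client, a longer delay (and an overall timeout) would be appropriate, since only one export task per account can be active at a time.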

#describe_log_groups(options = {}) ⇒ Types::DescribeLogGroupsResponse

Lists the specified log groups. You can list all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.

Examples:

Request syntax with placeholder values


resp = client.describe_log_groups({
  log_group_name_prefix: "LogGroupName",
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.log_groups #=> Array
resp.log_groups[0].log_group_name #=> String
resp.log_groups[0].creation_time #=> Integer
resp.log_groups[0].retention_in_days #=> Integer
resp.log_groups[0].metric_filter_count #=> Integer
resp.log_groups[0].arn #=> String
resp.log_groups[0].stored_bytes #=> Integer
resp.log_groups[0].kms_key_id #=> String
resp.next_token #=> String

Options Hash (options):

  • :log_group_name_prefix (String)

    The prefix to match.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

Returns:

See Also:
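Responses that include next_token are paginated: a caller keeps passing the token back until the service stops returning one. A hypothetical sketch of that loop; FakeLogsClient stands in for Aws::CloudWatchLogs::Client so the sketch runs offline:

```ruby
require 'ostruct'

# Stand-in for Aws::CloudWatchLogs::Client: serves pre-built pages.
class FakeLogsClient
  def initialize(pages)
    @pages = pages.dup
  end

  def describe_log_groups(params = {})
    @pages.shift
  end
end

# Collect every log group name by following next_token to the last page.
def all_log_group_names(client)
  names = []
  token = nil
  loop do
    resp = client.describe_log_groups(limit: 50, next_token: token)
    names.concat(resp.log_groups.map(&:log_group_name))
    token = resp.next_token
    break unless token
  end
  names
end
```

The same token-following pattern applies to the other describe_* operations in this class.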

#describe_log_streams(options = {}) ⇒ Types::DescribeLogStreamsResponse

Lists the log streams for the specified log group. You can list all the log streams or filter the results by prefix. You can also control how the results are ordered.

This operation has a limit of five transactions per second, after which transactions are throttled.

Examples:

Request syntax with placeholder values


resp = client.describe_log_streams({
  log_group_name: "LogGroupName", # required
  log_stream_name_prefix: "LogStreamName",
  order_by: "LogStreamName", # accepts LogStreamName, LastEventTime
  descending: false,
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.log_streams #=> Array
resp.log_streams[0].log_stream_name #=> String
resp.log_streams[0].creation_time #=> Integer
resp.log_streams[0].first_event_timestamp #=> Integer
resp.log_streams[0].last_event_timestamp #=> Integer
resp.log_streams[0].last_ingestion_time #=> Integer
resp.log_streams[0].upload_sequence_token #=> String
resp.log_streams[0].arn #=> String
resp.log_streams[0].stored_bytes #=> Integer
resp.next_token #=> String

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name_prefix (String)

    The prefix to match.

    If orderBy is LastEventTime, you cannot specify this parameter.

  • :order_by (String)

    If the value is LogStreamName, the results are ordered by log stream name. If the value is LastEventTime, the results are ordered by the event time. The default value is LogStreamName.

    If you order the results by event time, you cannot specify the logStreamNamePrefix parameter.

    lastEventTimeStamp represents the time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. lastEventTimeStamp updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but in rare situations might take longer.

  • :descending (Boolean)

    If the value is true, results are returned in descending order. If the value is false, results are returned in ascending order. The default value is false.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

Returns:

See Also:

#describe_metric_filters(options = {}) ⇒ Types::DescribeMetricFiltersResponse

Lists the specified metric filters. You can list all of the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name.

Examples:

Request syntax with placeholder values


resp = client.describe_metric_filters({
  log_group_name: "LogGroupName",
  filter_name_prefix: "FilterName",
  next_token: "NextToken",
  limit: 1,
  metric_name: "MetricName",
  metric_namespace: "MetricNamespace",
})

Response structure


resp.metric_filters #=> Array
resp.metric_filters[0].filter_name #=> String
resp.metric_filters[0].filter_pattern #=> String
resp.metric_filters[0].metric_transformations #=> Array
resp.metric_filters[0].metric_transformations[0].metric_name #=> String
resp.metric_filters[0].metric_transformations[0].metric_namespace #=> String
resp.metric_filters[0].metric_transformations[0].metric_value #=> String
resp.metric_filters[0].metric_transformations[0].default_value #=> Float
resp.metric_filters[0].creation_time #=> Integer
resp.metric_filters[0].log_group_name #=> String
resp.next_token #=> String

Options Hash (options):

  • :log_group_name (String)

    The name of the log group.

  • :filter_name_prefix (String)

    The prefix to match. CloudWatch Logs uses the value you set here only if you also include the logGroupName parameter in your request.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

  • :metric_name (String)

    Filters results to include only those with the specified metric name. If you include this parameter in your request, you must also include the metricNamespace parameter.

  • :metric_namespace (String)

    Filters results to include only those in the specified namespace. If you include this parameter in your request, you must also include the metricName parameter.

Returns:

See Also:

#describe_queries(options = {}) ⇒ Types::DescribeQueriesResponse

Returns a list of CloudWatch Logs Insights queries that are scheduled, executing, or have been executed recently in this account. You can request all queries or limit it to queries of a specific log group or queries with a certain status.

Examples:

Request syntax with placeholder values


resp = client.describe_queries({
  log_group_name: "LogGroupName",
  status: "Scheduled", # accepts Scheduled, Running, Complete, Failed, Cancelled
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.queries #=> Array
resp.queries[0].query_id #=> String
resp.queries[0].query_string #=> String
resp.queries[0].status #=> String, one of "Scheduled", "Running", "Complete", "Failed", "Cancelled"
resp.queries[0].create_time #=> Integer
resp.queries[0].log_group_name #=> String
resp.next_token #=> String

Options Hash (options):

  • :log_group_name (String)

    Limits the returned queries to only those for the specified log group.

  • :status (String)

    Limits the returned queries to only those that have the specified status. Valid values are Cancelled, Complete, Failed, Running, and Scheduled.

  • :max_results (Integer)

    Limits the number of returned queries to the specified number.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

Returns:

See Also:

#describe_query_definitions(options = {}) ⇒ Types::DescribeQueryDefinitionsResponse

This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions.

You can use the queryDefinitionNamePrefix parameter to limit the results to only the query definitions that have names that start with a certain string.

Examples:

Request syntax with placeholder values


resp = client.describe_query_definitions({
  query_definition_name_prefix: "QueryDefinitionName",
  max_results: 1,
  next_token: "NextToken",
})

Response structure


resp.query_definitions #=> Array
resp.query_definitions[0].query_definition_id #=> String
resp.query_definitions[0].name #=> String
resp.query_definitions[0].query_string #=> String
resp.query_definitions[0].last_modified #=> Integer
resp.query_definitions[0].log_group_names #=> Array
resp.query_definitions[0].log_group_names[0] #=> String
resp.next_token #=> String

Options Hash (options):

  • :query_definition_name_prefix (String)

    Use this parameter to filter your results to only the query definitions that have names that start with the prefix you specify.

  • :max_results (Integer)

    Limits the number of returned query definitions to the specified number.

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

Returns:

See Also:

#describe_resource_policies(options = {}) ⇒ Types::DescribeResourcePoliciesResponse

Lists the resource policies in this account.

Examples:

Request syntax with placeholder values


resp = client.describe_resource_policies({
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.resource_policies #=> Array
resp.resource_policies[0].policy_name #=> String
resp.resource_policies[0].policy_document #=> String
resp.resource_policies[0].last_updated_time #=> Integer
resp.next_token #=> String

Options Hash (options):

  • :next_token (String)

    The token for the next set of items to return. The token expires after 24 hours.

  • :limit (Integer)

    The maximum number of resource policies to be displayed with one call of this API.

Returns:

See Also:

#describe_subscription_filters(options = {}) ⇒ Types::DescribeSubscriptionFiltersResponse

Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name.

Examples:

Request syntax with placeholder values


resp = client.describe_subscription_filters({
  log_group_name: "LogGroupName", # required
  filter_name_prefix: "FilterName",
  next_token: "NextToken",
  limit: 1,
})

Response structure


resp.subscription_filters #=> Array
resp.subscription_filters[0].filter_name #=> String
resp.subscription_filters[0].log_group_name #=> String
resp.subscription_filters[0].filter_pattern #=> String
resp.subscription_filters[0].destination_arn #=> String
resp.subscription_filters[0].role_arn #=> String
resp.subscription_filters[0].distribution #=> String, one of "Random", "ByLogStream"
resp.subscription_filters[0].creation_time #=> Integer
resp.next_token #=> String

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name_prefix (String)

    The prefix to match. If you don't specify a value, no prefix filter is applied.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

Returns:

See Also:

#disassociate_kms_key(options = {}) ⇒ Struct

Disassociates the associated AWS Key Management Service (AWS KMS) customer master key (CMK) from the specified log group.

After the AWS KMS CMK is disassociated from the log group, AWS CloudWatch Logs stops encrypting newly ingested data for the log group. All previously ingested data remains encrypted, and AWS CloudWatch Logs requires permissions for the CMK whenever the encrypted data is requested.

Note that it can take up to 5 minutes for this operation to take effect.

Examples:

Request syntax with placeholder values


resp = client.disassociate_kms_key({
  log_group_name: "LogGroupName", # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#filter_log_events(options = {}) ⇒ Types::FilterLogEventsResponse

Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.

By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events) or all the events found within the time range that you specify. If the results include a token, then there are more log events available, and you can get additional results by specifying the token in a subsequent call. This operation can return empty results while there are more log events available through the token.

The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the PutLogEvents request.

Examples:

Request syntax with placeholder values


resp = client.filter_log_events({
  log_group_name: "LogGroupName", # required
  log_stream_names: ["LogStreamName"],
  log_stream_name_prefix: "LogStreamName",
  start_time: 1,
  end_time: 1,
  filter_pattern: "FilterPattern",
  next_token: "NextToken",
  limit: 1,
  interleaved: false,
})

Response structure


resp.events #=> Array
resp.events[0].log_stream_name #=> String
resp.events[0].timestamp #=> Integer
resp.events[0].message #=> String
resp.events[0].ingestion_time #=> Integer
resp.events[0].event_id #=> String
resp.searched_log_streams #=> Array
resp.searched_log_streams[0].log_stream_name #=> String
resp.searched_log_streams[0].searched_completely #=> true/false
resp.next_token #=> String

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group to search.

  • :log_stream_names (Array<String>)

    Filters the results to only logs from the log streams in this list.

    If you specify a value for both logStreamNamePrefix and logStreamNames, the action returns an InvalidParameterException error.

  • :log_stream_name_prefix (String)

    Filters the results to include only events from log streams that have names starting with this prefix.

    If you specify a value for both logStreamNamePrefix and logStreamNames, but the value for logStreamNamePrefix does not match any log stream names specified in logStreamNames, the action returns an InvalidParameterException error.

  • :start_time (Integer)

    The start of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp before this time are not returned.

    If you omit startTime and endTime, the most recent log events are retrieved, up to 1 MB or 10,000 log events.

  • :end_time (Integer)

    The end of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp later than this time are not returned.

  • :filter_pattern (String)

    The filter pattern to use. For more information, see Filter and Pattern Syntax.

    If not provided, all the events are matched.

  • :next_token (String)

    The token for the next set of events to return. (You received this token from a previous call.)

  • :limit (Integer)

    The maximum number of events to return. The default is 10,000 events.

  • :interleaved (Boolean)

    If the value is true, the operation makes a best effort to provide responses that contain events from multiple log streams within the log group, interleaved in a single response. If the value is false, all the matched log events in the first log stream are searched first, then those in the next log stream, and so on. The default is false.

    Important: Starting on June 17, 2019, this parameter is ignored and the value is assumed to be true. The response from this operation always interleaves events from multiple log streams within a log group.

Returns:

See Also:
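The token-based pagination described above can be sketched as a loop that follows nextToken until the service stops returning one. This is a minimal sketch; the `PagedStub` class and its data are hypothetical stand-ins so the example runs without a live client, and `client` may be any object responding to #filter_log_events, such as a constructed Aws::CloudWatchLogs::Client.

```ruby
require 'ostruct'

# Follows next_token until the service stops returning one, collecting
# all matched events across pages.
def collect_filtered_events(client, params)
  events = []
  token = nil
  loop do
    resp = client.filter_log_events(params.merge(next_token: token).compact)
    events.concat(resp.events)
    token = resp.next_token
    break if token.nil?
  end
  events
end

# Two-page stub client, used purely for illustration (hypothetical data).
class PagedStub
  def initialize
    @pages = [
      OpenStruct.new(events: [OpenStruct.new(message: 'ERROR a')], next_token: 'page-2'),
      OpenStruct.new(events: [OpenStruct.new(message: 'ERROR b')], next_token: nil)
    ]
  end

  def filter_log_events(_params)
    @pages.shift
  end
end

events = collect_filtered_events(PagedStub.new, log_group_name: 'my-group')
messages = events.map(&:message)
```

With a real client, the same loop applies unchanged; only the stub is swapped for `Aws::CloudWatchLogs::Client.new`.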

#get_log_events(options = {}) ⇒ Types::GetLogEventsResponse

Lists log events from the specified log stream. You can list all of the log events or filter using a time range.

By default, this operation returns as many log events as can fit in a response size of 1 MB (up to 10,000 log events). You can get additional log events by specifying one of the tokens in a subsequent call. This operation can return empty results while there are more log events available through the token.

Examples:

Request syntax with placeholder values


resp = client.get_log_events({
  log_group_name: "LogGroupName", # required
  log_stream_name: "LogStreamName", # required
  start_time: 1,
  end_time: 1,
  next_token: "NextToken",
  limit: 1,
  start_from_head: false,
})

Response structure


resp.events #=> Array
resp.events[0].timestamp #=> Integer
resp.events[0].message #=> String
resp.events[0].ingestion_time #=> Integer
resp.next_forward_token #=> String
resp.next_backward_token #=> String

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name (required, String)

    The name of the log stream.

  • :start_time (Integer)

    The start of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp equal to this time or later than this time are included. Events with a timestamp earlier than this time are not included.

  • :end_time (Integer)

    The end of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a timestamp equal to or later than this time are not included.

  • :next_token (String)

    The token for the next set of items to return. (You received this token from a previous call.)

    Using this token works only when you specify true for startFromHead.

  • :limit (Integer)

    The maximum number of log events returned. If you don't specify a value, the maximum is as many log events as can fit in a response size of 1 MB, up to 10,000 log events.

  • :start_from_head (Boolean)

    If the value is true, the earliest log events are returned first. If the value is false, the latest log events are returned first. The default value is false.

    If you are using nextToken in this operation, you must specify true for startFromHead.

Returns:

See Also:
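Because GetLogEvents always returns tokens, the end of a stream is detected when the returned nextForwardToken equals the token just used, rather than by a nil token. A minimal sketch of reading a stream from the beginning, using a hypothetical stub client so it runs standalone:

```ruby
require 'ostruct'

# Reads a stream from the head by following next_forward_token.
# GetLogEvents always returns a token, so the end of the stream is
# reached when the returned token equals the one just passed in.
def read_stream(client, group, stream)
  events = []
  token = nil
  loop do
    resp = client.get_log_events(
      { log_group_name: group, log_stream_name: stream,
        start_from_head: true, next_token: token }.compact
    )
    events.concat(resp.events)
    break if resp.next_forward_token == token
    token = resp.next_forward_token
  end
  events
end

# Stub client for illustration: two pages of data, then a repeat token.
class StreamStub
  def initialize
    @responses = [
      OpenStruct.new(events: [OpenStruct.new(message: 'x')], next_forward_token: 'f1'),
      OpenStruct.new(events: [OpenStruct.new(message: 'y')], next_forward_token: 'f2'),
      OpenStruct.new(events: [], next_forward_token: 'f2')
    ]
  end

  def get_log_events(_params)
    @responses.shift
  end
end

messages = read_stream(StreamStub.new, 'g', 's').map(&:message)
```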

#get_log_group_fields(options = {}) ⇒ Types::GetLogGroupFieldsResponse

Returns a list of the fields that are included in log events in the specified log group, along with the percentage of log events that contain each field. The search is limited to a time period that you specify.

In the results, fields that start with @ are fields generated by CloudWatch Logs. For example, @timestamp is the timestamp of each log event. For more information about the fields that are generated by CloudWatch Logs, see Supported Logs and Discovered Fields.

The response results are sorted by the frequency percentage, starting with the highest percentage.

Examples:

Request syntax with placeholder values


resp = client.get_log_group_fields({
  log_group_name: "LogGroupName", # required
  time: 1,
})

Response structure


resp.log_group_fields #=> Array
resp.log_group_fields[0].name #=> String
resp.log_group_fields[0].percent #=> Integer

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group to search.

  • :time (Integer)

    The time to set as the center of the query. If you specify time, the 8 minutes before and 8 minutes after this time are searched. If you omit time, the past 15 minutes are queried.

    The time value is specified as epoch time, the number of seconds since January 1, 1970, 00:00:00 UTC.

Returns:

See Also:
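Note the unit difference across operations: the :time option here (like :start_time/:end_time on #start_query) is epoch seconds, while #filter_log_events and #get_log_events take epoch milliseconds. A quick conversion sketch:

```ruby
# get_log_group_fields (and start_query) take epoch *seconds*, while
# filter_log_events and get_log_events take epoch *milliseconds*.
t = Time.utc(2020, 1, 1)

seconds = t.to_i               # suitable for the :time option here
millis  = (t.to_f * 1000).to_i # suitable for filter_log_events :start_time
```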

#get_log_record(options = {}) ⇒ Types::GetLogRecordResponse

Retrieves all of the fields and values of a single log event. All fields are retrieved, even if the original query that produced the logRecordPointer retrieved only a subset of fields. Fields are returned as field name/field value pairs.

The full unparsed log event is returned within @message.

Examples:

Request syntax with placeholder values


resp = client.get_log_record({
  log_record_pointer: "LogRecordPointer", # required
})

Response structure


resp.log_record #=> Hash
resp.log_record["Field"] #=> String

Options Hash (options):

  • :log_record_pointer (required, String)

    The pointer corresponding to the log event record you want to retrieve. You get this from the response of a GetQueryResults operation. In that response, the value of the @ptr field for a log event is the value to use as logRecordPointer to retrieve that complete log event record.

Returns:

See Also:

#get_query_results(options = {}) ⇒ Types::GetQueryResultsResponse

Returns the results from the specified query.

Only the fields requested in the query are returned, along with a @ptr field, which is the identifier for the log record. You can use the value of @ptr in a GetLogRecord operation to get the full log record.

GetQueryResults does not start a query execution. To run a query, use StartQuery.

If the value of the Status field in the output is Running, this operation returns only partial results. If you see a value of Scheduled or Running for the status, you can retry the operation later to see the final results.

Examples:

Request syntax with placeholder values


resp = client.get_query_results({
  query_id: "QueryId", # required
})

Response structure


resp.results #=> Array
resp.results[0] #=> Array
resp.results[0][0].field #=> String
resp.results[0][0].value #=> String
resp.statistics.records_matched #=> Float
resp.statistics.records_scanned #=> Float
resp.statistics.bytes_scanned #=> Float
resp.status #=> String, one of "Scheduled", "Running", "Complete", "Failed", "Cancelled"

Options Hash (options):

  • :query_id (required, String)

    The ID number of the query.

Returns:

See Also:

#list_tags_log_group(options = {}) ⇒ Types::ListTagsLogGroupResponse

Lists the tags for the specified log group.

Examples:

Request syntax with placeholder values


resp = client.list_tags_log_group({
  log_group_name: "LogGroupName", # required
})

Response structure


resp.tags #=> Hash
resp.tags["TagKey"] #=> String

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

Returns:

See Also:

#put_destination(options = {}) ⇒ Types::PutDestinationResponse

Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions.

A destination encapsulates a physical resource (such as an Amazon Kinesis stream) and enables you to subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents.

Through an access policy, a destination controls what is written to it. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.

To perform a PutDestination operation, you must also have the iam:PassRole permission.

Examples:

Request syntax with placeholder values


resp = client.put_destination({
  destination_name: "DestinationName", # required
  target_arn: "TargetArn", # required
  role_arn: "RoleArn", # required
})

Response structure


resp.destination.destination_name #=> String
resp.destination.target_arn #=> String
resp.destination.role_arn #=> String
resp.destination.access_policy #=> String
resp.destination.arn #=> String
resp.destination.creation_time #=> Integer

Options Hash (options):

  • :destination_name (required, String)

    A name for the destination.

  • :target_arn (required, String)

    The ARN of an Amazon Kinesis stream to which to deliver matching log events.

  • :role_arn (required, String)

    The ARN of an IAM role that grants CloudWatch Logs permissions to call the Amazon Kinesis PutRecord operation on the destination stream.

Returns:

See Also:

#put_destination_policy(options = {}) ⇒ Struct

Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination.

Examples:

Request syntax with placeholder values


resp = client.put_destination_policy({
  destination_name: "DestinationName", # required
  access_policy: "AccessPolicy", # required
})

Options Hash (options):

  • :destination_name (required, String)

    A name for an existing destination.

  • :access_policy (required, String)

    An IAM policy document that authorizes cross-account users to deliver their log events to the associated destination. This can be up to 5120 bytes.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#put_log_events(options = {}) ⇒ Types::PutLogEventsResponse

Uploads a batch of log events to the specified log stream.

You must include the sequence token obtained from the response of the previous call. An upload in a newly created log stream does not require a sequence token. You can also get the sequence token in the expectedSequenceToken field from InvalidSequenceTokenException. If you call PutLogEvents twice within a narrow time period using the same value for sequenceToken, both calls might be successful or one might be rejected.

The batch of events must satisfy the following constraints:

  • The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.

  • None of the log events in the batch can be more than 2 hours in the future.

  • None of the log events in the batch can be older than 14 days or older than the retention period of the log group.

  • The log events in the batch must be in chronological order by their timestamp. The timestamp is the time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. (In AWS Tools for PowerShell and the AWS SDK for .NET, the timestamp is specified in .NET format: yyyy-mm-ddThh:mm:ss. For example, 2017-09-15T13:45:30.)

  • A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.

  • The maximum number of log events in a batch is 10,000.

  • There is a quota of 5 requests per second per log stream. Additional requests are throttled. This quota can't be changed.

If a call to PutLogEvents returns "UnrecognizedClientException", the most likely cause is an invalid AWS access key ID or secret key.
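The size constraints above can be checked client-side before calling the API. A minimal sketch: the payload size is the sum of each message's UTF-8 byte length plus 26 bytes of per-event overhead, capped at 1,048,576 bytes and 10,000 events per batch.

```ruby
# Client-side check of the PutLogEvents batch constraints.
MAX_BATCH_BYTES  = 1_048_576
EVENT_OVERHEAD   = 26      # fixed overhead per log event
MAX_BATCH_EVENTS = 10_000

# Sum of UTF-8 message bytes plus per-event overhead.
def batch_bytes(events)
  events.sum { |e| e[:message].bytesize + EVENT_OVERHEAD }
end

def batch_within_limits?(events)
  events.size <= MAX_BATCH_EVENTS && batch_bytes(events) <= MAX_BATCH_BYTES
end

size = batch_bytes([{ timestamp: 0, message: 'hello' }])
ok   = batch_within_limits?([{ timestamp: 0, message: 'hello' }])
```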

Examples:

Request syntax with placeholder values


resp = client.put_log_events({
  log_group_name: "LogGroupName", # required
  log_stream_name: "LogStreamName", # required
  log_events: [ # required
    {
      timestamp: 1, # required
      message: "EventMessage", # required
    },
  ],
  sequence_token: "SequenceToken",
})

Response structure


resp.next_sequence_token #=> String
resp.rejected_log_events_info.too_new_log_event_start_index #=> Integer
resp.rejected_log_events_info.too_old_log_event_end_index #=> Integer
resp.rejected_log_events_info.expired_log_event_end_index #=> Integer

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :log_stream_name (required, String)

    The name of the log stream.

  • :log_events (required, Array<Types::InputLogEvent>)

    The log events.

  • :sequence_token (String)

    The sequence token obtained from the response of the previous PutLogEvents call. An upload in a newly created log stream does not require a sequence token. You can also get the sequence token using DescribeLogStreams. If you call PutLogEvents twice within a narrow time period using the same value for sequenceToken, both calls might be successful or one might be rejected.

Returns:

See Also:
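Threading the sequence token from each response into the next call can be sketched as below. The `SeqStub` class is a hypothetical stand-in so the example runs without a live client; a real Aws::CloudWatchLogs::Client response has the same next_sequence_token shape.

```ruby
require 'ostruct'

# Sends successive batches to one stream, passing each response's
# next_sequence_token as the sequence_token of the following call.
# The first call to a newly created stream needs no token.
def put_batches(client, group, stream, batches)
  token = nil
  batches.each do |events|
    resp = client.put_log_events(
      { log_group_name: group, log_stream_name: stream,
        log_events: events, sequence_token: token }.compact
    )
    token = resp.next_sequence_token
  end
  token
end

# Stub client for illustration only.
class SeqStub
  def initialize
    @n = 0
  end

  def put_log_events(_params)
    @n += 1
    OpenStruct.new(next_sequence_token: "seq-#{@n}")
  end
end

batches = [[{ timestamp: 1, message: 'a' }], [{ timestamp: 2, message: 'b' }]]
final_token = put_batches(SeqStub.new, 'g', 's', batches)
```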

#put_metric_filter(options = {}) ⇒ Struct

Creates or updates a metric filter and associates it with the specified log group. Metric filters allow you to configure rules to extract metric data from log events ingested through PutLogEvents.

The maximum number of metric filters that can be associated with a log group is 100.

Examples:

Request syntax with placeholder values


resp = client.put_metric_filter({
  log_group_name: "LogGroupName", # required
  filter_name: "FilterName", # required
  filter_pattern: "FilterPattern", # required
  metric_transformations: [ # required
    {
      metric_name: "MetricName", # required
      metric_namespace: "MetricNamespace", # required
      metric_value: "MetricValue", # required
      default_value: 1.0,
    },
  ],
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name (required, String)

    A name for the metric filter.

  • :filter_pattern (required, String)

    A filter pattern for extracting metric data out of ingested log events.

  • :metric_transformations (required, Array<Types::MetricTransformation>)

    A collection of information that defines how metric data gets emitted.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#put_query_definition(options = {}) ⇒ Types::PutQueryDefinitionResponse

Creates or updates a query definition for CloudWatch Logs Insights. For more information, see Analyzing Log Data with CloudWatch Logs Insights.

To update a query definition, specify its queryDefinitionId in your request. The values of name, queryString, and logGroupNames are changed to the values that you specify in your update operation. No current values are retained from the current query definition. For example, if you update a current query definition that includes log groups, and you don't specify the logGroupNames parameter in your update operation, the query definition changes to contain no log groups.

You must have the logs:PutQueryDefinition permission to be able to perform this operation.

Examples:

Request syntax with placeholder values


resp = client.put_query_definition({
  name: "QueryDefinitionName", # required
  query_definition_id: "QueryId",
  log_group_names: ["LogGroupName"],
  query_string: "QueryDefinitionString", # required
})

Response structure


resp.query_definition_id #=> String

Options Hash (options):

  • :name (required, String)

    A name for the query definition. If you are saving a lot of query definitions, we recommend that you name them so that you can easily find the ones you want by using the first part of the name as a filter in the queryDefinitionNamePrefix parameter of DescribeQueryDefinitions.

  • :query_definition_id (String)

    If you are updating a query definition, use this parameter to specify the ID of the query definition that you want to update. You can use DescribeQueryDefinitions to retrieve the IDs of your saved query definitions.

    If you are creating a query definition, do not specify this parameter. CloudWatch generates a unique ID for the new query definition and includes it in the response to this operation.

  • :log_group_names (Array<String>)

    Use this parameter to include specific log groups as part of your query definition.

    If you are updating a query definition and you omit this parameter, then the updated definition will contain no log groups.

  • :query_string (required, String)

    The query string to use for this definition. For more information, see CloudWatch Logs Insights Query Syntax.

Returns:

See Also:

#put_resource_policy(options = {}) ⇒ Types::PutResourcePolicyResponse

Creates or updates a resource policy allowing other AWS services to put log events to this account, such as Amazon Route 53. An account can have up to 10 resource policies per AWS Region.

Examples:

Request syntax with placeholder values


resp = client.put_resource_policy({
  policy_name: "PolicyName",
  policy_document: "PolicyDocument",
})

Response structure


resp.resource_policy.policy_name #=> String
resp.resource_policy.policy_document #=> String
resp.resource_policy.last_updated_time #=> Integer

Options Hash (options):

  • :policy_name (String)

    Name of the new policy. This parameter is required.

  • :policy_document (String)

    Details of the new policy, including the identity of the principal that is enabled to put logs to this account. This is formatted as a JSON string. This parameter is required.

    The following example creates a resource policy enabling the Route 53 service to put DNS query logs in to the specified log group. Replace "logArn" with the ARN of your CloudWatch Logs resource, such as a log group or log stream.

    { "Version": "2012-10-17", "Statement": [ { "Sid": "Route53LogsToCloudWatchLogs", "Effect": "Allow", "Principal": { "Service": [ "route53.amazonaws.com" ] }, "Action":"logs:PutLogEvents", "Resource": "logArn" } ] }

Returns:

See Also:
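The example policy document above can be built as a JSON string in Ruby before being passed as :policy_document. The log group ARN below is hypothetical; substitute the ARN of your own CloudWatch Logs resource.

```ruby
require 'json'

# Hypothetical log group ARN; replace with your own resource's ARN.
log_arn = 'arn:aws:logs:us-east-1:123456789012:log-group:my-group:*'

# Serializes the Route 53 example policy shown above to a JSON string,
# suitable for the :policy_document option of put_resource_policy.
policy_document = JSON.generate(
  'Version' => '2012-10-17',
  'Statement' => [{
    'Sid' => 'Route53LogsToCloudWatchLogs',
    'Effect' => 'Allow',
    'Principal' => { 'Service' => ['route53.amazonaws.com'] },
    'Action' => 'logs:PutLogEvents',
    'Resource' => log_arn
  }]
)
```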

#put_retention_policy(options = {}) ⇒ Struct

Sets the retention of the specified log group. A retention policy allows you to configure the number of days for which to retain log events in the specified log group.

Examples:

Request syntax with placeholder values


resp = client.put_retention_policy({
  log_group_name: "LogGroupName", # required
  retention_in_days: 1, # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :retention_in_days (required, Integer)

    The number of days to retain the log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.

    If you omit retentionInDays in a PutRetentionPolicy operation, the events in the log group are always retained and never expire.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#put_subscription_filter(options = {}) ⇒ Struct

Creates or updates a subscription filter and associates it with the specified log group. Subscription filters allow you to subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the gzip format.

The following destinations are supported for subscription filters:

  • An Amazon Kinesis stream belonging to the same account as the subscription filter, for same-account delivery.

  • A logical destination that belongs to a different account, for cross-account delivery.

  • An Amazon Kinesis Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery.

  • An AWS Lambda function that belongs to the same account as the subscription filter, for same-account delivery.

There can only be one subscription filter associated with a log group. If you are updating an existing filter, you must specify the correct name in filterName. Otherwise, the call fails because you cannot associate a second filter with a log group.

To perform a PutSubscriptionFilter operation, you must also have the iam:PassRole permission.

Examples:

Request syntax with placeholder values


resp = client.put_subscription_filter({
  log_group_name: "LogGroupName", # required
  filter_name: "FilterName", # required
  filter_pattern: "FilterPattern", # required
  destination_arn: "DestinationArn", # required
  role_arn: "RoleArn",
  distribution: "Random", # accepts Random, ByLogStream
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :filter_name (required, String)

    A name for the subscription filter. If you are updating an existing filter, you must specify the correct name in filterName. Otherwise, the call fails because you cannot associate a second filter with a log group. To find the name of the filter currently associated with a log group, use DescribeSubscriptionFilters.

  • :filter_pattern (required, String)

    A filter pattern for subscribing to a filtered stream of log events.

  • :destination_arn (required, String)

    The ARN of the destination to deliver matching log events to. Currently, the supported destinations are:

    • An Amazon Kinesis stream belonging to the same account as the subscription filter, for same-account delivery.

    • A logical destination (specified using an ARN) belonging to a different account, for cross-account delivery.

    • An Amazon Kinesis Firehose delivery stream belonging to the same account as the subscription filter, for same-account delivery.

    • An AWS Lambda function belonging to the same account as the subscription filter, for same-account delivery.

  • :role_arn (String)

    The ARN of an IAM role that grants CloudWatch Logs permissions to deliver ingested log events to the destination stream. You don't need to provide the ARN when you are working with a logical destination for cross-account delivery.

  • :distribution (String)

    The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to random for a more even distribution. This property is only applicable when the destination is an Amazon Kinesis stream.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#start_query(options = {}) ⇒ Types::StartQueryResponse

Schedules a query of a log group using CloudWatch Logs Insights. You specify the log group and time range to query and the query string to use.

For more information, see CloudWatch Logs Insights Query Syntax.

Queries time out after 15 minutes of execution. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.

Examples:

Request syntax with placeholder values


resp = client.start_query({
  log_group_name: "LogGroupName",
  log_group_names: ["LogGroupName"],
  start_time: 1, # required
  end_time: 1, # required
  query_string: "QueryString", # required
  limit: 1,
})

Response structure


resp.query_id #=> String

Options Hash (options):

  • :log_group_name (String)

    The log group on which to perform the query.

    A StartQuery operation must include a logGroupNames or a logGroupName parameter, but not both.

  • :log_group_names (Array<String>)

    The list of log groups to be queried. You can include up to 20 log groups.

    A StartQuery operation must include a logGroupNames or a logGroupName parameter, but not both.

  • :start_time (required, Integer)

    The beginning of the time range to query. The range is inclusive, so the specified start time is included in the query. Specified as epoch time, the number of seconds since January 1, 1970, 00:00:00 UTC.

  • :end_time (required, Integer)

    The end of the time range to query. The range is inclusive, so the specified end time is included in the query. Specified as epoch time, the number of seconds since January 1, 1970, 00:00:00 UTC.

  • :query_string (required, String)

    The query string to use. For more information, see CloudWatch Logs Insights Query Syntax.

  • :limit (Integer)

    The maximum number of log events to return in the query. If the query string uses the fields command, only the specified fields and their values are returned. The default is 1000.

Returns:

See Also:

#stop_query(options = {}) ⇒ Types::StopQueryResponse

Stops a CloudWatch Logs Insights query that is in progress. If the query has already ended, the operation returns an error indicating that the specified query is not running.

Examples:

Request syntax with placeholder values


resp = client.stop_query({
  query_id: "QueryId", # required
})

Response structure


resp.success #=> true/false

Options Hash (options):

  • :query_id (required, String)

    The ID number of the query to stop. To find this ID number, use DescribeQueries.

Returns:

See Also:

#tag_log_group(options = {}) ⇒ Struct

Adds or updates the specified tags for the specified log group.

To list the tags for a log group, use ListTagsLogGroup. To remove tags, use UntagLogGroup.

For more information about tags, see Tag Log Groups in Amazon CloudWatch Logs in the Amazon CloudWatch Logs User Guide.

Examples:

Request syntax with placeholder values


resp = client.tag_log_group({
  log_group_name: "LogGroupName", # required
  tags: { # required
    "TagKey" => "TagValue",
  },
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :tags (required, Hash<String,String>)

    The key-value pairs to use for the tags.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#test_metric_filter(options = {}) ⇒ Types::TestMetricFilterResponse

Tests the filter pattern of a metric filter against a sample of log event messages. You can use this operation to validate the correctness of a metric filter pattern.

Examples:

Request syntax with placeholder values


resp = client.test_metric_filter({
  filter_pattern: "FilterPattern", # required
  log_event_messages: ["EventMessage"], # required
})

Response structure


resp.matches #=> Array
resp.matches[0].event_number #=> Integer
resp.matches[0].event_message #=> String
resp.matches[0].extracted_values #=> Hash
resp.matches[0].extracted_values["Token"] #=> String

Options Hash (options):

  • :filter_pattern (required, String)

    A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message.

  • :log_event_messages (required, Array<String>)

    The log event messages to test.

Returns:

See Also:

#untag_log_group(options = {}) ⇒ Struct

Removes the specified tags from the specified log group.

To list the tags for a log group, use ListTagsLogGroup. To add tags, use TagLogGroup.

Examples:

Request syntax with placeholder values


resp = client.untag_log_group({
  log_group_name: "LogGroupName", # required
  tags: ["TagKey"], # required
})

Options Hash (options):

  • :log_group_name (required, String)

    The name of the log group.

  • :tags (required, Array<String>)

    The tag keys. The corresponding tags are removed from the log group.

Returns:

  • (Struct)

    Returns an empty response.

See Also:

#wait_until(waiter_name, params = {}) {|waiter| ... } ⇒ Boolean

Waiters poll an API operation until a resource enters a desired state.

Basic Usage

Waiters will poll until they are successful, they fail by entering a terminal state, or until a maximum number of attempts are made.

# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)

Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You configure waiters by passing a block to #wait_until:

# poll for ~25 seconds
client.wait_until(...) do |w|
  w.max_attempts = 5
  w.delay = 5
end

Callbacks

You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.

started_at = Time.now
client.wait_until(...) do |w|

  # disable max attempts
  w.max_attempts = nil

  # poll for 1 hour, instead of a number of attempts
  w.before_wait do |attempts, response|
    throw :failure if Time.now - started_at > 3600
  end

end
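The `throw :success` / `throw :failure` termination above relies on Ruby's built-in catch/throw control flow. A standalone sketch of that mechanism, with a hypothetical `fake_poll` lambda standing in for the API call (no AWS SDK required):

```ruby
# Sketch of the catch/throw control flow a waiter callback can use.
# `fake_poll` is a stand-in for an API call: it reports :pending twice,
# then :ready.
attempts = 0
fake_poll = -> { attempts += 1; attempts >= 3 ? :ready : :pending }

result = catch(:success) do
  catch(:failure) do
    10.times do
      state = fake_poll.call
      # throwing unwinds straight out of the loop to the catch block
      throw :success, true if state == :ready
      # a before_wait-style callback could `throw :failure` here instead
    end
    false
  end
end

puts result   # true
puts attempts # 3
```

Throwing a tag that matches an enclosing `catch` unwinds immediately to it, which is how a callback can stop the polling loop from the inside.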

Handling Errors

When a waiter is successful, it returns true. When a waiter fails, it raises an error. All errors raised extend from Waiters::Errors::WaiterFailed.

begin
  client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end

Parameters:

  • waiter_name (Symbol)

    The name of the waiter. See #waiter_names for a full list of supported waiters.

  • params (Hash) (defaults to: {})

    Additional request parameters. See #waiter_names for a list of supported waiters and the request each one calls. The called request determines the list of accepted parameters.

Yield Parameters:

Returns:

  • (Boolean)

    Returns true if the waiter was successful.

Raises:

  • (Errors::FailureStateError)

    Raised when the waiter terminates because it has entered a state it will not transition out of, preventing success.

  • (Errors::TooManyAttemptsError)

    Raised when the configured maximum number of attempts has been made and the waiter is not yet successful.

  • (Errors::UnexpectedError)

    Raised when an unexpected error is encountered while polling for a resource.

  • (Errors::NoSuchWaiterError)

    Raised when you request to wait for an unknown state.

#waiter_namesArray<Symbol>

Returns the list of supported waiters. The following table lists the supported waiters and the client method they call:

Waiter Name | Client Method | Default Delay | Default Max Attempts

Returns:

  • (Array<Symbol>)

    the list of supported waiters.