Implementation for accessing Amazon CloudWatch Logs
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from EC2 instances, CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs using the CloudWatch console. Alternatively, you can use CloudWatch Logs commands in the Amazon Web Services CLI, the CloudWatch Logs API, or the CloudWatch Logs SDK. You can use CloudWatch Logs to:
Monitor logs from EC2 instances in real time: You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs. Then, it can send you a notification whenever the rate of errors exceeds a threshold that you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException"). You can also count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify.
Monitor CloudTrail logged events: You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail. You can use the notification to perform troubleshooting.
Archive log data: You can use CloudWatch Logs to store your log data in highly durable storage. You can change the log retention setting so that any log events earlier than this setting are automatically deleted. The CloudWatch Logs agent helps to quickly send both rotated and non-rotated log data off of a host and into the log service. You can then access the raw log data when you need it.
Namespace: Amazon.CloudWatchLogs
Assembly: AWSSDK.CloudWatchLogs.dll
Version: 3.x.y.z
public class AmazonCloudWatchLogsClient : AmazonServiceClient, IAmazonCloudWatchLogs, IAmazonService, IDisposable
The AmazonCloudWatchLogsClient type exposes the following members.
Name | Description
---|---
AmazonCloudWatchLogsClient() |
Constructs AmazonCloudWatchLogsClient with the credentials loaded from the application's default configuration, and if unsuccessful from the Instance Profile service on an EC2 instance. Example App.config with credentials set. <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="AWSProfileName" value="AWS Default"/> </appSettings> </configuration> |
|
AmazonCloudWatchLogsClient(RegionEndpoint) |
Constructs AmazonCloudWatchLogsClient with the credentials loaded from the application's default configuration, and if unsuccessful from the Instance Profile service on an EC2 instance. Example App.config with credentials set. <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="AWSProfileName" value="AWS Default"/> </appSettings> </configuration> |
|
AmazonCloudWatchLogsClient(AmazonCloudWatchLogsConfig) |
Constructs AmazonCloudWatchLogsClient with the credentials loaded from the application's default configuration, and if unsuccessful from the Instance Profile service on an EC2 instance. Example App.config with credentials set. <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="AWSProfileName" value="AWS Default"/> </appSettings> </configuration> |
|
AmazonCloudWatchLogsClient(AWSCredentials) |
Constructs AmazonCloudWatchLogsClient with AWS Credentials |
|
AmazonCloudWatchLogsClient(AWSCredentials, RegionEndpoint) |
Constructs AmazonCloudWatchLogsClient with AWS Credentials |
|
AmazonCloudWatchLogsClient(AWSCredentials, AmazonCloudWatchLogsConfig) |
Constructs AmazonCloudWatchLogsClient with AWS Credentials and an AmazonCloudWatchLogsClient Configuration object. |
|
AmazonCloudWatchLogsClient(string, string) |
Constructs AmazonCloudWatchLogsClient with AWS Access Key ID and AWS Secret Key |
|
AmazonCloudWatchLogsClient(string, string, RegionEndpoint) |
Constructs AmazonCloudWatchLogsClient with AWS Access Key ID and AWS Secret Key |
|
AmazonCloudWatchLogsClient(string, string, AmazonCloudWatchLogsConfig) |
Constructs AmazonCloudWatchLogsClient with AWS Access Key ID, AWS Secret Key and an AmazonCloudWatchLogsClient Configuration object. |
|
AmazonCloudWatchLogsClient(string, string, string) |
Constructs AmazonCloudWatchLogsClient with AWS Access Key ID and AWS Secret Key |
|
AmazonCloudWatchLogsClient(string, string, string, RegionEndpoint) |
Constructs AmazonCloudWatchLogsClient with AWS Access Key ID and AWS Secret Key |
|
AmazonCloudWatchLogsClient(string, string, string, AmazonCloudWatchLogsConfig) |
Constructs AmazonCloudWatchLogsClient with AWS Access Key ID, AWS Secret Key and an AmazonCloudWatchLogsClient Configuration object. |
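The constructor overloads above can be sketched as follows. This is a minimal illustration, not a definitive implementation: the profile name, region, and credential values are placeholders, and in practice you would normally rely on the default credential chain rather than hard-coded keys.

```csharp
using System;
using Amazon;
using Amazon.CloudWatchLogs;
using Amazon.Runtime;

class ClientConstructionExamples
{
    static void Main()
    {
        // Credentials resolved from the default configuration chain
        // (app config, shared credentials file, environment variables,
        // then the EC2 instance profile).
        var defaultClient = new AmazonCloudWatchLogsClient();

        // Same credential resolution, but with an explicit region.
        var regionClient = new AmazonCloudWatchLogsClient(RegionEndpoint.USEast1);

        // Explicit credentials plus a configuration object. The key
        // values here are placeholders, not real credentials.
        var credentials = new BasicAWSCredentials("AKIA_EXAMPLE", "example-secret");
        var config = new AmazonCloudWatchLogsConfig
        {
            RegionEndpoint = RegionEndpoint.USWest2,
            Timeout = TimeSpan.FromSeconds(30)
        };
        var configuredClient = new AmazonCloudWatchLogsClient(credentials, config);
    }
}
```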
Name | Type | Description
---|---|---
Config | Amazon.Runtime.IClientConfig | Inherited from Amazon.Runtime.AmazonServiceClient. | |
Paginators | Amazon.CloudWatchLogs.Model.ICloudWatchLogsPaginatorFactory |
Paginators for the service |
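The `Paginators` property wraps the token-based paging of operations such as `DescribeLogGroups`. A minimal sketch of iterating every log group without handling `NextToken` yourself (the client setup assumes default credentials):

```csharp
using System;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

class PaginatorExample
{
    static async Task Main()
    {
        var client = new AmazonCloudWatchLogsClient();

        // The paginator issues DescribeLogGroups calls and follows
        // NextToken internally; await foreach yields each log group.
        var paginator = client.Paginators.DescribeLogGroups(new DescribeLogGroupsRequest());
        await foreach (var logGroup in paginator.LogGroups)
        {
            Console.WriteLine(logGroup.LogGroupName);
        }
    }
}
```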
Name | Description | |
---|---|---|
AssociateKmsKey(AssociateKmsKeyRequest) |
Associates the specified KMS key with either one log group in the account, or with all stored CloudWatch Logs query insights results in the account.
When you use
If you delete the key that is used to encrypt log events or log group query results, then all the associated stored log events or query results that were encrypted with that key will be unencryptable and unusable. CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group or query results. For more information, see Using Symmetric and Asymmetric Keys. It can take up to 5 minutes for this operation to take effect.
If you attempt to associate a KMS key with a log group but the KMS key does not exist
or the KMS key is disabled, you receive an |
|
AssociateKmsKeyAsync(AssociateKmsKeyRequest, CancellationToken) |
Associates the specified KMS key with either one log group in the account, or with all stored CloudWatch Logs query insights results in the account.
When you use
If you delete the key that is used to encrypt log events or log group query results, then all the associated stored log events or query results that were encrypted with that key will be unencryptable and unusable. CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group or query results. For more information, see Using Symmetric and Asymmetric Keys. It can take up to 5 minutes for this operation to take effect.
If you attempt to associate a KMS key with a log group but the KMS key does not exist
or the KMS key is disabled, you receive an |
|
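A minimal sketch of associating a KMS key with a log group. The log group name and key ARN are placeholders; the key must be a symmetric KMS key that already exists and is enabled:

```csharp
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

class AssociateKmsKeyExample
{
    static async Task Main()
    {
        var client = new AmazonCloudWatchLogsClient();

        // Placeholder names; the key must be symmetric, and it can take
        // up to 5 minutes for the association to take effect.
        await client.AssociateKmsKeyAsync(new AssociateKmsKeyRequest
        {
            LogGroupName = "my-log-group",
            KmsKeyId = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
        });
    }
}
```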
CancelExportTask(CancelExportTaskRequest) |
Cancels the specified export task.
The task must be in the |
|
CancelExportTaskAsync(CancelExportTaskRequest, CancellationToken) |
Cancels the specified export task.
The task must be in the |
|
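Cancelling an export task is a single call; the task ID here is a placeholder obtained from an earlier `CreateExportTask` response, and the task must still be pending or running:

```csharp
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

class CancelExportTaskExample
{
    static async Task Main()
    {
        var client = new AmazonCloudWatchLogsClient();

        // Placeholder task ID; cancelling only succeeds while the task
        // has not yet completed.
        await client.CancelExportTaskAsync(new CancelExportTaskRequest
        {
            TaskId = "example-task-id"
        });
    }
}
```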
CreateDelivery(CreateDeliveryRequest) |
Creates a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination that you have already created. Only some Amazon Web Services services support being configured as a delivery source using this operation. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services. A delivery destination can represent a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose. To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. To update an existing delivery configuration, use UpdateDeliveryConfiguration. |
|
CreateDeliveryAsync(CreateDeliveryRequest, CancellationToken) |
Creates a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination that you have already created. Only some Amazon Web Services services support being configured as a delivery source using this operation. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services. A delivery destination can represent a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose. To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. To update an existing delivery configuration, use UpdateDeliveryConfiguration. |
|
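A minimal sketch of creating a delivery, assuming the delivery source and delivery destination were already created (the source name and destination ARN below are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

class CreateDeliveryExample
{
    static async Task Main()
    {
        var client = new AmazonCloudWatchLogsClient();

        // Connects an existing delivery source to an existing delivery
        // destination; both names/ARNs are placeholders.
        var response = await client.CreateDeliveryAsync(new CreateDeliveryRequest
        {
            DeliverySourceName = "my-delivery-source",
            DeliveryDestinationArn =
                "arn:aws:logs:us-east-1:111122223333:delivery-destination:my-destination"
        });
        Console.WriteLine(response.Delivery.Id);
    }
}
```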
CreateExportTask(CreateExportTaskRequest) |
Creates an export task so that you can efficiently export data from a log group to
an Amazon S3 bucket. When you perform a Exporting log data to S3 buckets that are encrypted by KMS is supported. Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a retention period is also supported. Exporting to S3 buckets that are encrypted with AES-256 is supported.
This is an asynchronous call. If all the required information is provided, this operation
initiates an export task and responds with the ID of the task. After the task has
started, you can use DescribeExportTasks
to get the status of the export task. Each account can only have one active ( You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate log data for each export task, specify a prefix to be used as the Amazon S3 key prefix for all exported objects. Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can sort the exported log field data by using Linux utilities. |
|
CreateExportTaskAsync(CreateExportTaskRequest, CancellationToken) |
Creates an export task so that you can efficiently export data from a log group to
an Amazon S3 bucket. When you perform a Exporting log data to S3 buckets that are encrypted by KMS is supported. Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a retention period is also supported. Exporting to S3 buckets that are encrypted with AES-256 is supported.
This is an asynchronous call. If all the required information is provided, this operation
initiates an export task and responds with the ID of the task. After the task has
started, you can use DescribeExportTasks
to get the status of the export task. Each account can only have one active ( You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate log data for each export task, specify a prefix to be used as the Amazon S3 key prefix for all exported objects. Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can sort the exported log field data by using Linux utilities. |
|
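A sketch of starting an export and then checking its status with `DescribeExportTasks`, as the text above describes. The log group, task name, and bucket are placeholders; the bucket must already exist and permit CloudWatch Logs to write to it:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

class ExportTaskExample
{
    static async Task Main()
    {
        var client = new AmazonCloudWatchLogsClient();

        // From/To are epoch milliseconds; here, the last 24 hours.
        long to = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
        long from = to - 24 * 60 * 60 * 1000L;

        var created = await client.CreateExportTaskAsync(new CreateExportTaskRequest
        {
            TaskName = "example-export",
            LogGroupName = "my-log-group",
            From = from,
            To = to,
            Destination = "my-export-bucket",       // placeholder bucket
            DestinationPrefix = "exported-logs"
        });

        // The export runs asynchronously on the service side; poll its
        // status until it is no longer pending or running.
        var status = await client.DescribeExportTasksAsync(new DescribeExportTasksRequest
        {
            TaskId = created.TaskId
        });
        Console.WriteLine(status.ExportTasks[0].Status.Code);
    }
}
```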
CreateLogAnomalyDetector(CreateLogAnomalyDetectorRequest) |
Creates an anomaly detector that regularly scans one or more log groups and looks for patterns and anomalies in the logs. An anomaly detector can help surface issues by automatically discovering anomalies in your log event traffic. An anomaly detector uses machine learning algorithms to scan log events and find patterns. A pattern is a shared text structure that recurs among your log fields. Patterns provide a useful tool for analyzing large sets of logs because a large number of log events can often be compressed into a few patterns.
The anomaly detector uses pattern recognition to find
Fields within a pattern are called tokens. Fields that vary within a pattern,
such as a request ID or timestamp, are referred to as dynamic tokens and represented
by The following is an example of a pattern:
This pattern represents log events like Any parts of log events that are masked as sensitive data are not scanned for anomalies. For more information about masking sensitive data, see Help protect sensitive log data with masking. |
|
CreateLogAnomalyDetectorAsync(CreateLogAnomalyDetectorRequest, CancellationToken) |
Creates an anomaly detector that regularly scans one or more log groups and looks for patterns and anomalies in the logs. An anomaly detector can help surface issues by automatically discovering anomalies in your log event traffic. An anomaly detector uses machine learning algorithms to scan log events and find patterns. A pattern is a shared text structure that recurs among your log fields. Patterns provide a useful tool for analyzing large sets of logs because a large number of log events can often be compressed into a few patterns.
The anomaly detector uses pattern recognition to find
Fields within a pattern are called tokens. Fields that vary within a pattern,
such as a request ID or timestamp, are referred to as dynamic tokens and represented
by The following is an example of a pattern:
This pattern represents log events like Any parts of log events that are masked as sensitive data are not scanned for anomalies. For more information about masking sensitive data, see Help protect sensitive log data with masking. |
|
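A minimal sketch of creating an anomaly detector for one log group. The detector name and log group ARN are placeholders, and only the required-style parameters are shown; optional settings such as the evaluation frequency are omitted:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

class CreateAnomalyDetectorExample
{
    static async Task Main()
    {
        var client = new AmazonCloudWatchLogsClient();

        // Placeholder detector name and log group ARN.
        var response = await client.CreateLogAnomalyDetectorAsync(
            new CreateLogAnomalyDetectorRequest
            {
                DetectorName = "example-detector",
                LogGroupArnList = new List<string>
                {
                    "arn:aws:logs:us-east-1:111122223333:log-group:my-log-group"
                }
            });
        Console.WriteLine(response.AnomalyDetectorArn);
    }
}
```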
CreateLogGroup(CreateLogGroupRequest) |
Creates a log group with the specified name. You can create up to 1,000,000 log groups per Region per account. You must use the following guidelines when naming a log group:
When you create a log group, by default the log events in the log group do not expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy. If you associate a KMS key with the log group, ingested data is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
If you attempt to associate a KMS key with the log group but the KMS key does not
exist or the KMS key is disabled, you receive an CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group. For more information, see Using Symmetric and Asymmetric Keys. |
|
CreateLogGroupAsync(CreateLogGroupRequest, CancellationToken) |
Creates a log group with the specified name. You can create up to 1,000,000 log groups per Region per account. You must use the following guidelines when naming a log group:
When you create a log group, by default the log events in the log group do not expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy. If you associate a KMS key with the log group, ingested data is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
If you attempt to associate a KMS key with the log group but the KMS key does not
exist or the KMS key is disabled, you receive an CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group. For more information, see Using Symmetric and Asymmetric Keys. |
|
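A sketch of creating a log group and then attaching a retention policy, as the description above recommends. The log group name and 30-day retention value are placeholders:

```csharp
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

class CreateLogGroupExample
{
    static async Task Main()
    {
        var client = new AmazonCloudWatchLogsClient();

        // Create the log group; without a retention policy its events
        // never expire.
        await client.CreateLogGroupAsync(new CreateLogGroupRequest
        {
            LogGroupName = "my-log-group"
        });

        // Attach a retention policy so events expire after 30 days.
        await client.PutRetentionPolicyAsync(new PutRetentionPolicyRequest
        {
            LogGroupName = "my-log-group",
            RetentionInDays = 30
        });
    }
}
```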
CreateLogStream(CreateLogStreamRequest) |
Creates a log stream for the specified log group. A log stream is a sequence of log events that originate from a single source, such as an application instance or a resource that is being monitored.
There is no limit on the number of log streams that you can create for a log group.
There is a limit of 50 TPS on You must use the following guidelines when naming a log stream:
|
|
CreateLogStreamAsync(CreateLogStreamRequest, CancellationToken) |
Creates a log stream for the specified log group. A log stream is a sequence of log events that originate from a single source, such as an application instance or a resource that is being monitored.
There is no limit on the number of log streams that you can create for a log group.
There is a limit of 50 TPS on You must use the following guidelines when naming a log stream:
|
|
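Creating a log stream inside an existing log group is a single call; both names below are placeholders, and note the 50 TPS limit on this operation mentioned above:

```csharp
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

class CreateLogStreamExample
{
    static async Task Main()
    {
        var client = new AmazonCloudWatchLogsClient();

        // A stream typically identifies a single source, such as one
        // application instance; names here are placeholders.
        await client.CreateLogStreamAsync(new CreateLogStreamRequest
        {
            LogGroupName = "my-log-group",
            LogStreamName = "instance-1/application.log"
        });
    }
}
```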
DeleteAccountPolicy(DeleteAccountPolicyRequest) |
Deletes a CloudWatch Logs account policy. This stops the account-wide policy from applying to log groups in the account. If you delete a data protection policy or subscription filter policy, any log-group level policies of those types remain in effect. To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are deleting.
If you delete a field index policy, the indexing of the log events that happened before you deleted the policy will still be used for up to 30 days to improve CloudWatch Logs Insights queries. |
|
DeleteAccountPolicyAsync(DeleteAccountPolicyRequest, CancellationToken) |
Deletes a CloudWatch Logs account policy. This stops the account-wide policy from applying to log groups in the account. If you delete a data protection policy or subscription filter policy, any log-group level policies of those types remain in effect. To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are deleting.
If you delete a field index policy, the indexing of the log events that happened before you deleted the policy will still be used for up to 30 days to improve CloudWatch Logs Insights queries. |
|
DeleteDataProtectionPolicy(DeleteDataProtectionPolicyRequest) |
Deletes the data protection policy from the specified log group. For more information about data protection policies, see PutDataProtectionPolicy. |
|
DeleteDataProtectionPolicyAsync(DeleteDataProtectionPolicyRequest, CancellationToken) |
Deletes the data protection policy from the specified log group. For more information about data protection policies, see PutDataProtectionPolicy. |
|
DeleteDelivery(DeleteDeliveryRequest) |
Deletes a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination. Deleting a delivery only deletes the connection between the delivery source and delivery destination. It does not delete the delivery destination or the delivery source. |
|
DeleteDeliveryAsync(DeleteDeliveryRequest, CancellationToken) |
Deletes a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination. Deleting a delivery only deletes the connection between the delivery source and delivery destination. It does not delete the delivery destination or the delivery source. |
|
DeleteDeliveryDestination(DeleteDeliveryDestinationRequest) |
Deletes a delivery destination. A delivery is a connection between a logical delivery source and a logical delivery destination.
You can't delete a delivery destination if any current deliveries are associated with
it. To find whether any deliveries are associated with this delivery destination,
use the DescribeDeliveries
operation and check the |
|
DeleteDeliveryDestinationAsync(DeleteDeliveryDestinationRequest, CancellationToken) |
Deletes a delivery destination. A delivery is a connection between a logical delivery source and a logical delivery destination.
You can't delete a delivery destination if any current deliveries are associated with
it. To find whether any deliveries are associated with this delivery destination,
use the DescribeDeliveries
operation and check the |
|
DeleteDeliveryDestinationPolicy(DeleteDeliveryDestinationPolicyRequest) |
Deletes a delivery destination policy. For more information about these policies, see PutDeliveryDestinationPolicy. |
|
DeleteDeliveryDestinationPolicyAsync(DeleteDeliveryDestinationPolicyRequest, CancellationToken) |
Deletes a delivery destination policy. For more information about these policies, see PutDeliveryDestinationPolicy. |
|
DeleteDeliverySource(DeleteDeliverySourceRequest) |
Deletes a delivery source. A delivery is a connection between a logical delivery source and a logical delivery destination.
You can't delete a delivery source if any current deliveries are associated with it.
To find whether any deliveries are associated with this delivery source, use the DescribeDeliveries
operation and check the |
|
DeleteDeliverySourceAsync(DeleteDeliverySourceRequest, CancellationToken) |
Deletes a delivery source. A delivery is a connection between a logical delivery source and a logical delivery destination.
You can't delete a delivery source if any current deliveries are associated with it.
To find whether any deliveries are associated with this delivery source, use the DescribeDeliveries
operation and check the |
|
DeleteDestination(DeleteDestinationRequest) |
Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination. |
|
DeleteDestinationAsync(DeleteDestinationRequest, CancellationToken) |
Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination. |
|
DeleteIndexPolicy(DeleteIndexPolicyRequest) |
Deletes a log-group level field index policy that was applied to a single log group. The indexing of the log events that happened before you delete the policy will still be used for as many as 30 days to improve CloudWatch Logs Insights queries. You can't use this operation to delete an account-level index policy. Instead, use DeleteAccountPolicy. If you delete a log-group level field index policy and there is an account-level field index policy, in a few minutes the log group begins using that account-wide policy to index new incoming log events. |
|
DeleteIndexPolicyAsync(DeleteIndexPolicyRequest, CancellationToken) |
Deletes a log-group level field index policy that was applied to a single log group. The indexing of the log events that happened before you delete the policy will still be used for as many as 30 days to improve CloudWatch Logs Insights queries. You can't use this operation to delete an account-level index policy. Instead, use DeleteAccountPolicy. If you delete a log-group level field index policy and there is an account-level field index policy, in a few minutes the log group begins using that account-wide policy to index new incoming log events. |
|
DeleteIntegration(DeleteIntegrationRequest) |
Deletes the integration between CloudWatch Logs and OpenSearch Service. If your integration
has active vended logs dashboards, you must specify |
|
DeleteIntegrationAsync(DeleteIntegrationRequest, CancellationToken) |
Deletes the integration between CloudWatch Logs and OpenSearch Service. If your integration
has active vended logs dashboards, you must specify |
|
DeleteLogAnomalyDetector(DeleteLogAnomalyDetectorRequest) |
Deletes the specified CloudWatch Logs anomaly detector. |
|
DeleteLogAnomalyDetectorAsync(DeleteLogAnomalyDetectorRequest, CancellationToken) |
Deletes the specified CloudWatch Logs anomaly detector. |
|
DeleteLogGroup(DeleteLogGroupRequest) |
Deletes the specified log group and permanently deletes all the archived log events associated with the log group. |
|
DeleteLogGroupAsync(DeleteLogGroupRequest, CancellationToken) |
Deletes the specified log group and permanently deletes all the archived log events associated with the log group. |
|
DeleteLogStream(DeleteLogStreamRequest) |
Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream. |
|
DeleteLogStreamAsync(DeleteLogStreamRequest, CancellationToken) |
Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream. |
|
DeleteMetricFilter(DeleteMetricFilterRequest) |
Deletes the specified metric filter. |
|
DeleteMetricFilterAsync(DeleteMetricFilterRequest, CancellationToken) |
Deletes the specified metric filter. |
|
DeleteQueryDefinition(DeleteQueryDefinitionRequest) |
Deletes a saved CloudWatch Logs Insights query definition. A query definition contains details about a saved CloudWatch Logs Insights query.
Each
You must have the |
|
DeleteQueryDefinitionAsync(DeleteQueryDefinitionRequest, CancellationToken) |
Deletes a saved CloudWatch Logs Insights query definition. A query definition contains details about a saved CloudWatch Logs Insights query.
Each
You must have the |
|
DeleteResourcePolicy(DeleteResourcePolicyRequest) |
Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account. |
|
DeleteResourcePolicyAsync(DeleteResourcePolicyRequest, CancellationToken) |
Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account. |
|
DeleteRetentionPolicy(DeleteRetentionPolicyRequest) |
Deletes the specified retention policy. Log events do not expire if they belong to log groups without a retention policy. |
|
DeleteRetentionPolicyAsync(DeleteRetentionPolicyRequest, CancellationToken) |
Deletes the specified retention policy. Log events do not expire if they belong to log groups without a retention policy. |
|
DeleteSubscriptionFilter(DeleteSubscriptionFilterRequest) |
Deletes the specified subscription filter. |
|
DeleteSubscriptionFilterAsync(DeleteSubscriptionFilterRequest, CancellationToken) |
Deletes the specified subscription filter. |
|
DeleteTransformer(DeleteTransformerRequest) |
Deletes the log transformer for the specified log group. As soon as you do this, the transformation of incoming log events according to that transformer stops. If this account has an account-level transformer that applies to this log group, the log group begins using that account-level transformer when this log-group level transformer is deleted. After you delete a transformer, be sure to edit any metric filters or subscription filters that relied on the transformed versions of the log events. |
|
DeleteTransformerAsync(DeleteTransformerRequest, CancellationToken) |
Deletes the log transformer for the specified log group. As soon as you do this, the transformation of incoming log events according to that transformer stops. If this account has an account-level transformer that applies to this log group, the log group begins using that account-level transformer when this log-group level transformer is deleted. After you delete a transformer, be sure to edit any metric filters or subscription filters that relied on the transformed versions of the log events. |
|
DescribeAccountPolicies(DescribeAccountPoliciesRequest) |
Returns a list of all CloudWatch Logs account policies in the account. |
|
DescribeAccountPoliciesAsync(DescribeAccountPoliciesRequest, CancellationToken) |
Returns a list of all CloudWatch Logs account policies in the account. |
|
DescribeConfigurationTemplates(DescribeConfigurationTemplatesRequest) |
Use this operation to return the valid and default values that are used when creating delivery sources, delivery destinations, and deliveries. For more information about deliveries, see CreateDelivery. |
|
DescribeConfigurationTemplatesAsync(DescribeConfigurationTemplatesRequest, CancellationToken) |
Use this operation to return the valid and default values that are used when creating delivery sources, delivery destinations, and deliveries. For more information about deliveries, see CreateDelivery. |
|
DescribeDeliveries(DescribeDeliveriesRequest) |
Retrieves a list of the deliveries that have been created in the account. A delivery is a connection between a delivery source and a delivery destination. A delivery source represents an Amazon Web Services resource that sends logs to a delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services. |
|
DescribeDeliveriesAsync(DescribeDeliveriesRequest, CancellationToken) |
Retrieves a list of the deliveries that have been created in the account. A delivery is a connection between a delivery source and a delivery destination. A delivery source represents an Amazon Web Services resource that sends logs to a delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services. |
|
DescribeDeliveryDestinations(DescribeDeliveryDestinationsRequest) |
Retrieves a list of the delivery destinations that have been created in the account. |
|
DescribeDeliveryDestinationsAsync(DescribeDeliveryDestinationsRequest, CancellationToken) |
Retrieves a list of the delivery destinations that have been created in the account. |
|
DescribeDeliverySources(DescribeDeliverySourcesRequest) |
Retrieves a list of the delivery sources that have been created in the account. |
|
DescribeDeliverySourcesAsync(DescribeDeliverySourcesRequest, CancellationToken) |
Retrieves a list of the delivery sources that have been created in the account. |
|
DescribeDestinations(DescribeDestinationsRequest) |
Lists all your destinations. The results are ASCII-sorted by destination name. |
|
DescribeDestinationsAsync(DescribeDestinationsRequest, CancellationToken) |
Lists all your destinations. The results are ASCII-sorted by destination name. |
|
DescribeExportTasks(DescribeExportTasksRequest) |
Lists the specified export tasks. You can list all your export tasks or filter the results based on task ID or task status. |
|
DescribeExportTasksAsync(DescribeExportTasksRequest, CancellationToken) |
Lists the specified export tasks. You can list all your export tasks or filter the results based on task ID or task status. |
|
DescribeFieldIndexes(DescribeFieldIndexesRequest) |
Returns a list of field indexes listed in the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy. |
|
DescribeFieldIndexesAsync(DescribeFieldIndexesRequest, CancellationToken) |
Returns a list of field indexes listed in the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy. |
|
DescribeIndexPolicies(DescribeIndexPoliciesRequest) |
Returns the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy. If a specified log group has a log-group level index policy, that policy is returned by this operation. If a specified log group doesn't have a log-group level index policy, but an account-wide index policy applies to it, that account-wide policy is returned by this operation. To find information about only account-level policies, use DescribeAccountPolicies instead. |
|
DescribeIndexPoliciesAsync(DescribeIndexPoliciesRequest, CancellationToken) |
Returns the field index policies of one or more log groups. For more information about field index policies, see PutIndexPolicy. If a specified log group has a log-group level index policy, that policy is returned by this operation. If a specified log group doesn't have a log-group level index policy, but an account-wide index policy applies to it, that account-wide policy is returned by this operation. To find information about only account-level policies, use DescribeAccountPolicies instead. |
|
DescribeLogGroups() |
Lists the specified log groups. You can list all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.
CloudWatch Logs doesn't support IAM policies that control access to the If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
DescribeLogGroups(DescribeLogGroupsRequest) |
Lists the specified log groups. You can list all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.
CloudWatch Logs doesn't support IAM policies that control access to the DescribeLogGroups action by using the aws:ResourceTag/key-name condition key. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
DescribeLogGroupsAsync(CancellationToken) |
Lists the specified log groups. You can list all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.
CloudWatch Logs doesn't support IAM policies that control access to the DescribeLogGroups action by using the aws:ResourceTag/key-name condition key. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
DescribeLogGroupsAsync(DescribeLogGroupsRequest, CancellationToken) |
Lists the specified log groups. You can list all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.
CloudWatch Logs doesn't support IAM policies that control access to the DescribeLogGroups action by using the aws:ResourceTag/key-name condition key. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
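The listing operations above are paginated: each response may carry a nextToken to pass into the next call. A minimal sketch of that loop, written in Python for brevity (the same shape applies with the .NET client); fetch_page here is a hypothetical stand-in for the DescribeLogGroups call:

```python
# Token-based pagination over DescribeLogGroups-style responses.
# `fetch_page` is a hypothetical stand-in for the real service call.
def list_all_log_groups(fetch_page):
    """Collect every log group by following nextToken until it is absent."""
    groups, token = [], None
    while True:
        page = fetch_page(next_token=token)  # one DescribeLogGroups call
        groups.extend(page["logGroups"])
        token = page.get("nextToken")
        if token is None:  # no token means the listing is complete
            return groups

# Simulated pages, shaped like the real responses (ASCII-sorted by name).
_pages = {
    None: {"logGroups": [{"logGroupName": "/app/api"}], "nextToken": "t1"},
    "t1": {"logGroups": [{"logGroupName": "/app/worker"}]},
}
names = [g["logGroupName"]
         for g in list_all_log_groups(lambda next_token: _pages[next_token])]
```

The same loop applies to the other Describe* operations that return a nextToken.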
DescribeLogStreams(DescribeLogStreamsRequest) |
Lists the log streams for the specified log group. You can list all the log streams or filter the results by prefix. You can also control how the results are ordered.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both. This operation has a limit of five transactions per second, after which transactions are throttled. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
DescribeLogStreamsAsync(DescribeLogStreamsRequest, CancellationToken) |
Lists the log streams for the specified log group. You can list all the log streams or filter the results by prefix. You can also control how the results are ordered.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both. This operation has a limit of five transactions per second, after which transactions are throttled. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
DescribeMetricFilters(DescribeMetricFiltersRequest) |
Lists the specified metric filters. You can list all of the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name. |
|
DescribeMetricFiltersAsync(DescribeMetricFiltersRequest, CancellationToken) |
Lists the specified metric filters. You can list all of the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name. |
|
DescribeQueries(DescribeQueriesRequest) |
Returns a list of CloudWatch Logs Insights queries that are scheduled, running, or have been run recently in this account. You can request all queries or limit it to queries of a specific log group or queries with a certain status. |
|
DescribeQueriesAsync(DescribeQueriesRequest, CancellationToken) |
Returns a list of CloudWatch Logs Insights queries that are scheduled, running, or have been run recently in this account. You can request all queries or limit it to queries of a specific log group or queries with a certain status. |
|
DescribeQueryDefinitions(DescribeQueryDefinitionsRequest) |
This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions. You can retrieve query definitions from the current account or from a source account that is linked to the current account.
You can use the queryDefinitionNamePrefix parameter to limit the results to only the query definitions that have names that start with that prefix. |
|
DescribeQueryDefinitionsAsync(DescribeQueryDefinitionsRequest, CancellationToken) |
This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions. You can retrieve query definitions from the current account or from a source account that is linked to the current account.
You can use the queryDefinitionNamePrefix parameter to limit the results to only the query definitions that have names that start with that prefix. |
|
DescribeResourcePolicies(DescribeResourcePoliciesRequest) |
Lists the resource policies in this account. |
|
DescribeResourcePoliciesAsync(DescribeResourcePoliciesRequest, CancellationToken) |
Lists the resource policies in this account. |
|
DescribeSubscriptionFilters(DescribeSubscriptionFiltersRequest) |
Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name. |
|
DescribeSubscriptionFiltersAsync(DescribeSubscriptionFiltersRequest, CancellationToken) |
Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name. |
|
DetermineServiceOperationEndpoint(AmazonWebServiceRequest) |
Returns the endpoint that will be used for a particular request. |
|
DisassociateKmsKey(DisassociateKmsKeyRequest) |
Disassociates the specified KMS key from the specified log group or from all CloudWatch Logs Insights query results in the account.
When you use DisassociateKmsKey, you specify either the logGroupName parameter or the resourceIdentifier parameter. You can't specify both of those parameters in the same operation.
It can take up to 5 minutes for this operation to take effect. |
|
DisassociateKmsKeyAsync(DisassociateKmsKeyRequest, CancellationToken) |
Disassociates the specified KMS key from the specified log group or from all CloudWatch Logs Insights query results in the account.
When you use DisassociateKmsKey, you specify either the logGroupName parameter or the resourceIdentifier parameter. You can't specify both of those parameters in the same operation.
It can take up to 5 minutes for this operation to take effect. |
|
Dispose() | Inherited from Amazon.Runtime.AmazonServiceClient. | |
FilterLogEvents(FilterLogEventsRequest) |
Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.
You must have the logs:FilterLogEvents permission to perform this operation.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both. By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events) or all the events found within the specified time range. If the results include a token, that means there are more log events available. You can get additional results by specifying the token in a subsequent call. This operation can return empty results while there are more log events available through the token.
The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the PutLogEvents request. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
FilterLogEventsAsync(FilterLogEventsRequest, CancellationToken) |
Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.
You must have the logs:FilterLogEvents permission to perform this operation.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both. By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events) or all the events found within the specified time range. If the results include a token, that means there are more log events available. You can get additional results by specifying the token in a subsequent call. This operation can return empty results while there are more log events available through the token.
The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the PutLogEvents request. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
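As the FilterLogEvents description notes, the operation can return empty results while more log events remain available through the token, so a consumer must keep following nextToken rather than stopping on an empty page. A sketch of that loop in Python (fetch_page is a hypothetical stand-in for the FilterLogEvents call):

```python
def collect_filtered_events(fetch_page):
    """Follow nextToken even when a page carries zero events; stop only
    when the response has no token at all."""
    events, token = [], None
    while True:
        page = fetch_page(next_token=token)
        events.extend(page["events"])
        token = page.get("nextToken")
        if token is None:
            return events

# Simulated responses: the middle page is empty but still has a token,
# so stopping on "no events" would silently drop the last event.
_pages = {
    None: {"events": [{"message": "ERROR a"}], "nextToken": "t1"},
    "t1": {"events": [], "nextToken": "t2"},
    "t2": {"events": [{"message": "ERROR b"}]},
}
msgs = [e["message"]
        for e in collect_filtered_events(lambda next_token: _pages[next_token])]
```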
GetDataProtectionPolicy(GetDataProtectionPolicyRequest) |
Returns information about a log group data protection policy. |
|
GetDataProtectionPolicyAsync(GetDataProtectionPolicyRequest, CancellationToken) |
Returns information about a log group data protection policy. |
|
GetDelivery(GetDeliveryRequest) |
Returns complete information about one logical delivery. A delivery is a connection between a delivery source and a delivery destination . A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.
You need to specify the delivery ID in this operation. You can find the IDs of the deliveries in your account by using the DescribeDeliveries operation. |
|
GetDeliveryAsync(GetDeliveryRequest, CancellationToken) |
Returns complete information about one logical delivery. A delivery is a connection between a delivery source and a delivery destination . A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.
You need to specify the delivery ID in this operation. You can find the IDs of the deliveries in your account by using the DescribeDeliveries operation. |
|
GetDeliveryDestination(GetDeliveryDestinationRequest) |
Retrieves complete information about one delivery destination. |
|
GetDeliveryDestinationAsync(GetDeliveryDestinationRequest, CancellationToken) |
Retrieves complete information about one delivery destination. |
|
GetDeliveryDestinationPolicy(GetDeliveryDestinationPolicyRequest) |
Retrieves the delivery destination policy assigned to the delivery destination that you specify. For more information about delivery destinations and their policies, see PutDeliveryDestinationPolicy. |
|
GetDeliveryDestinationPolicyAsync(GetDeliveryDestinationPolicyRequest, CancellationToken) |
Retrieves the delivery destination policy assigned to the delivery destination that you specify. For more information about delivery destinations and their policies, see PutDeliveryDestinationPolicy. |
|
GetDeliverySource(GetDeliverySourceRequest) |
Retrieves complete information about one delivery source. |
|
GetDeliverySourceAsync(GetDeliverySourceRequest, CancellationToken) |
Retrieves complete information about one delivery source. |
|
GetIntegration(GetIntegrationRequest) |
Returns information about one integration between CloudWatch Logs and OpenSearch Service. |
|
GetIntegrationAsync(GetIntegrationRequest, CancellationToken) |
Returns information about one integration between CloudWatch Logs and OpenSearch Service. |
|
GetLogAnomalyDetector(GetLogAnomalyDetectorRequest) |
Retrieves information about the log anomaly detector that you specify. |
|
GetLogAnomalyDetectorAsync(GetLogAnomalyDetectorRequest, CancellationToken) |
Retrieves information about the log anomaly detector that you specify. |
|
GetLogEvents(GetLogEventsRequest) |
Lists log events from the specified log stream. You can list all of the log events or filter using a time range. By default, this operation returns as many log events as can fit in a response size of 1 MB (up to 10,000 log events). You can get additional log events by specifying one of the tokens in a subsequent call. This operation can return empty results while there are more log events available through the token. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both. |
|
GetLogEventsAsync(GetLogEventsRequest, CancellationToken) |
Lists log events from the specified log stream. You can list all of the log events or filter using a time range. By default, this operation returns as many log events as can fit in a response size of 1 MB (up to 10,000 log events). You can get additional log events by specifying one of the tokens in a subsequent call. This operation can return empty results while there are more log events available through the token. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both. |
|
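GetLogEvents paginates differently from the filter and describe operations: a nextForwardToken is always present in the response, and the end of the stream is signaled by the token repeating rather than being absent. A Python sketch under that assumption, with fetch_page standing in for the GetLogEvents call:

```python
def read_stream_forward(fetch_page):
    """GetLogEvents always returns a nextForwardToken; the end of the
    stream is signaled by the token repeating, not by its absence."""
    events, token = [], None
    while True:
        page = fetch_page(next_token=token)
        events.extend(page["events"])
        new_token = page["nextForwardToken"]
        if new_token == token:  # same token back means no more events
            return events
        token = new_token

# Simulated responses: the final page echoes the token it was given.
_pages = {
    None: {"events": [{"message": "start"}], "nextForwardToken": "f1"},
    "f1": {"events": [{"message": "done"}], "nextForwardToken": "f2"},
    "f2": {"events": [], "nextForwardToken": "f2"},
}
msgs = [e["message"]
        for e in read_stream_forward(lambda next_token: _pages[next_token])]
```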
GetLogGroupFields(GetLogGroupFieldsRequest) |
Returns a list of the fields that are included in log events in the specified log group. Includes the percentage of log events that contain each field. The search is limited to a time period that you specify.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both.
In the results, fields that start with @ are fields generated by CloudWatch Logs. For example, @timestamp is the timestamp of each log event. The response results are sorted by the frequency percentage, starting with the highest percentage. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
GetLogGroupFieldsAsync(GetLogGroupFieldsRequest, CancellationToken) |
Returns a list of the fields that are included in log events in the specified log group. Includes the percentage of log events that contain each field. The search is limited to a time period that you specify.
You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both.
In the results, fields that start with @ are fields generated by CloudWatch Logs. For example, @timestamp is the timestamp of each log event. The response results are sorted by the frequency percentage, starting with the highest percentage. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability. |
|
GetLogRecord(GetLogRecordRequest) |
Retrieves all of the fields and values of a single log event. All fields are retrieved, even if the original query that produced the logRecordPointer retrieved only a subset of fields. Fields are returned as field name/field value pairs.
The full unparsed log event is returned within @message. |
|
GetLogRecordAsync(GetLogRecordRequest, CancellationToken) |
Retrieves all of the fields and values of a single log event. All fields are retrieved, even if the original query that produced the logRecordPointer retrieved only a subset of fields. Fields are returned as field name/field value pairs.
The full unparsed log event is returned within @message. |
|
GetQueryResults(GetQueryResultsRequest) |
Returns the results from the specified query.
Only the fields requested in the query are returned, along with a @ptr field, which is the identifier for the log record. You can use the value of @ptr in a GetLogRecord operation to get the full log record.
If the value of the Status field in the output is Running, this operation returns only partial results. If you see a value of Scheduled or Running for the status, you can retry the operation later to see the final results. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start queries in linked source accounts. For more information, see CloudWatch cross-account observability. |
|
GetQueryResultsAsync(GetQueryResultsRequest, CancellationToken) |
Returns the results from the specified query.
Only the fields requested in the query are returned, along with a @ptr field, which is the identifier for the log record. You can use the value of @ptr in a GetLogRecord operation to get the full log record.
If the value of the Status field in the output is Running, this operation returns only partial results. If you see a value of Scheduled or Running for the status, you can retry the operation later to see the final results. If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start queries in linked source accounts. For more information, see CloudWatch cross-account observability. |
|
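Because GetQueryResults returns partial results while a query is still Scheduled or Running, callers typically poll until the query reaches a terminal status. A Python sketch of that polling loop (get_results is a hypothetical stand-in for the GetQueryResults call; the status/results field names follow the response shape described above):

```python
import time

def wait_for_query(get_results, poll_seconds=0.0, max_polls=50):
    """Poll GetQueryResults until the query leaves Scheduled/Running."""
    for _ in range(max_polls):
        resp = get_results()
        if resp["status"] not in ("Scheduled", "Running"):
            return resp  # Complete, Failed, Cancelled, etc.
        time.sleep(poll_seconds)
    raise TimeoutError("query did not finish within the polling budget")

# Simulated status progression for a single query.
_responses = iter([
    {"status": "Scheduled", "results": []},
    {"status": "Running", "results": []},
    {"status": "Complete",
     "results": [[{"field": "@message", "value": "hello"}]]},
])
final = wait_for_query(lambda: next(_responses))
```

In production code, use a non-zero poll interval to stay within API rate limits.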
GetTransformer(GetTransformerRequest) |
Returns the information about the log transformer associated with this log group. This operation returns data only for transformers created at the log group level. To get information for an account-level transformer, use DescribeAccountPolicies. |
|
GetTransformerAsync(GetTransformerRequest, CancellationToken) |
Returns the information about the log transformer associated with this log group. This operation returns data only for transformers created at the log group level. To get information for an account-level transformer, use DescribeAccountPolicies. |
|
ListAnomalies(ListAnomaliesRequest) |
Returns a list of anomalies that log anomaly detectors have found. For details about the structure format of each anomaly object that is returned, see the example in this section. |
|
ListAnomaliesAsync(ListAnomaliesRequest, CancellationToken) |
Returns a list of anomalies that log anomaly detectors have found. For details about the structure format of each anomaly object that is returned, see the example in this section. |
|
ListIntegrations(ListIntegrationsRequest) |
Returns a list of integrations between CloudWatch Logs and other services in this account. Currently, only one integration can be created in an account, and this integration must be with OpenSearch Service. |
|
ListIntegrationsAsync(ListIntegrationsRequest, CancellationToken) |
Returns a list of integrations between CloudWatch Logs and other services in this account. Currently, only one integration can be created in an account, and this integration must be with OpenSearch Service. |
|
ListLogAnomalyDetectors(ListLogAnomalyDetectorsRequest) |
Retrieves a list of the log anomaly detectors in the account. |
|
ListLogAnomalyDetectorsAsync(ListLogAnomalyDetectorsRequest, CancellationToken) |
Retrieves a list of the log anomaly detectors in the account. |
|
ListLogGroupsForQuery(ListLogGroupsForQueryRequest) |
Returns a list of the log groups that were analyzed during a single CloudWatch Logs
Insights query. This can be useful for queries that use log group name prefixes or
the filterIndex command, because the log groups are dynamically selected in these cases. For more information about field indexes, see Create field indexes to improve query performance and reduce costs. |
|
ListLogGroupsForQueryAsync(ListLogGroupsForQueryRequest, CancellationToken) |
Returns a list of the log groups that were analyzed during a single CloudWatch Logs
Insights query. This can be useful for queries that use log group name prefixes or
the filterIndex command, because the log groups are dynamically selected in these cases. For more information about field indexes, see Create field indexes to improve query performance and reduce costs. |
|
ListTagsForResource(ListTagsForResourceRequest) |
Displays the tags associated with a CloudWatch Logs resource. Currently, log groups and destinations support tagging. |
|
ListTagsForResourceAsync(ListTagsForResourceRequest, CancellationToken) |
Displays the tags associated with a CloudWatch Logs resource. Currently, log groups and destinations support tagging. |
|
ListTagsLogGroup(ListTagsLogGroupRequest) |
The ListTagsLogGroup operation is on the path to deprecation. We recommend that you
use ListTagsForResource
instead.
Lists the tags for the specified log group. |
|
ListTagsLogGroupAsync(ListTagsLogGroupRequest, CancellationToken) |
The ListTagsLogGroup operation is on the path to deprecation. We recommend that you
use ListTagsForResource
instead.
Lists the tags for the specified log group. |
|
PutAccountPolicy(PutAccountPolicyRequest) |
Creates an account-level data protection policy, subscription filter policy, transformer policy, or field index policy that applies to all log groups or a subset of log groups in the account.
Data protection policy
A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy. Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked.
If you use PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
To use the PutAccountPolicy operation for a data protection policy, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.
The PutAccountPolicy operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
Subscription filter policy
A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
Each account can have one account-level subscription filter policy per Region. If you are updating an existing filter, you must specify the correct name in policyName.
Transformer policy
Creates or updates a log transformer policy for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID, and Region.
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use.
Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies. You can create transformers only for the log groups in the Standard log class.
You can have one account-level transformer policy that applies to all log groups in the account. Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the selectionCriteria parameter.
You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer.
Field index policy
You can use field index policies to create indexes on fields found in log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, or instance IDs. For more information, see Create field indexes to improve query performance and reduce costs. To find the fields that are in your log group events, use the GetLogGroupFields operation.
For example, suppose you have created a field index for requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId IN [value, value, ...] will attempt to process only the log events where the indexed requestId field matches the specified value.
Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field of RequestId won't match a log event containing requestId.
You can have one account-level field index policy that applies to all log groups in the account. Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with the selectionCriteria parameter. If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts.
If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of PutAccountPolicy. |
|
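The subscription filter description in PutAccountPolicy notes that delivered log events are Base64 encoded and compressed with GZIP. A receiving service (for example, a Lambda function) reverses that encoding before reading the JSON envelope. A Python sketch; the envelope fields shown follow the documented DATA_MESSAGE shape:

```python
import base64
import gzip
import json

def decode_subscription_event(data_b64):
    """Reverse the encoding CloudWatch Logs applies before delivery:
    base64-decode, then gunzip, then parse the JSON envelope."""
    return json.loads(gzip.decompress(base64.b64decode(data_b64)))

# Build a sample payload shaped like a CloudWatch Logs envelope, encoded
# the same way the service encodes deliveries.
envelope = {
    "messageType": "DATA_MESSAGE",
    "logGroup": "/app/api",
    "logEvents": [{"id": "1", "timestamp": 0, "message": "hello"}],
}
data_b64 = base64.b64encode(gzip.compress(json.dumps(envelope).encode())).decode()
decoded = decode_subscription_event(data_b64)
```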
PutAccountPolicyAsync(PutAccountPolicyRequest, CancellationToken) |
Creates an account-level data protection policy, subscription filter policy, transformer policy, or field index policy that applies to all log groups or a subset of log groups in the account.
Data protection policy
A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy. Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked.
If you use PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
To use the PutAccountPolicy operation for a data protection policy, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.
The PutAccountPolicy operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
Subscription filter policy
A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
Each account can have one account-level subscription filter policy per Region. If you are updating an existing filter, you must specify the correct name in policyName.
Transformer policy
Creates or updates a log transformer policy for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID, and Region.
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use.
Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies. You can create transformers only for the log groups in the Standard log class.
You can have one account-level transformer policy that applies to all log groups in the account. Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with the selectionCriteria parameter.
You can also set up a transformer at the log-group level. For more information, see PutTransformer. If there is both a log-group level transformer created with PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer.
Field index policy
You can use field index policies to create indexes on fields found in log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, or instance IDs. For more information, see Create field indexes to improve query performance and reduce costs. To find the fields that are in your log group events, use the GetLogGroupFields operation.
For example, suppose you have created a field index for requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId IN [value, value, ...] will attempt to process only the log events where the indexed requestId field matches the specified value.
Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field of RequestId won't match a log event containing requestId.
You can have one account-level field index policy that applies to all log groups in the account. Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with the selectionCriteria parameter. If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts.
If you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of PutAccountPolicy. |
|
PutDataProtectionPolicy(PutDataProtectionPolicyRequest) |
Creates a data protection policy for the specified log group. A data protection policy
can help safeguard sensitive data that's ingested by the log group by auditing and
masking the sensitive log data.
Sensitive data is detected and masked when it is ingested into the log group. When
you set a data protection policy, log events ingested into the log group before that
time are not masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
The PutDataProtectionPolicy operation applies to only the specified log group. You can also use PutAccountPolicy to create an account-level data protection policy that applies to all log groups in the account. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked. |
|
PutDataProtectionPolicyAsync(PutDataProtectionPolicyRequest, CancellationToken) |
Creates a data protection policy for the specified log group. A data protection policy
can help safeguard sensitive data that's ingested by the log group by auditing and
masking the sensitive log data.
Sensitive data is detected and masked when it is ingested into the log group. When
you set a data protection policy, log events ingested into the log group before that
time are not masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
The PutDataProtectionPolicy operation applies to only the specified log group. You can also use PutAccountPolicy to create an account-level data protection policy that applies to all log groups in the account. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked. |
|
PutDeliveryDestination(PutDeliveryDestinationRequest) |
Creates or updates a logical delivery destination. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported as logs delivery destinations. To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following: create a delivery source, which represents the resource that is actually sending the logs, by using PutDeliverySource; create a delivery destination by using this operation; if you are delivering logs cross-account, use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination; and then create a delivery by pairing exactly one delivery source and one delivery destination with CreateDelivery.
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services. If you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten with the new parameter values that you specify. |
|
PutDeliveryDestinationAsync(PutDeliveryDestinationRequest, CancellationToken) |
Creates or updates a logical delivery destination. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported as logs delivery destinations. To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following: create a delivery source, which represents the resource that is actually sending the logs, by using PutDeliverySource; create a delivery destination by using this operation; if you are delivering logs cross-account, use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination; and then create a delivery by pairing exactly one delivery source and one delivery destination with CreateDelivery.
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services. If you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten with the new parameter values that you specify. |
|
PutDeliveryDestinationPolicy(PutDeliveryDestinationPolicyRequest) |
Creates and assigns an IAM policy that grants permissions to CloudWatch Logs to deliver logs cross-account to a specified destination in this account. To configure the delivery of logs from an Amazon Web Services service in another account to a logs delivery destination in the current account, create a delivery source in the sending account, create a delivery destination in this account, assign this policy to the destination with this operation, and then pair them by creating a delivery (CreateDelivery).
Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services. The contents of the policy must include two statements. One statement enables general logs delivery, and the other allows delivery to the chosen destination. See the examples for the needed policies. |
|
PutDeliveryDestinationPolicyAsync(PutDeliveryDestinationPolicyRequest, CancellationToken) |
Creates and assigns an IAM policy that grants permissions to CloudWatch Logs to deliver logs cross-account to a specified destination in this account. To configure the delivery of logs from an Amazon Web Services service in another account to a logs delivery destination in the current account, create a delivery source in the sending account, create a delivery destination in this account, assign this policy to the destination with this operation, and then pair them by creating a delivery (CreateDelivery).
Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services. The contents of the policy must include two statements. One statement enables general logs delivery, and the other allows delivery to the chosen destination. See the examples for the needed policies. |
|
PutDeliverySource(PutDeliverySourceRequest) |
Creates or updates a logical delivery source. A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. To configure logs delivery between a delivery destination and an Amazon Web Services service that is supported as a delivery source, create a delivery source with this operation, create a delivery destination (PutDeliveryDestination), and then pair them by creating a delivery (CreateDelivery).
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services. If you use this operation to update an existing delivery source, all the current delivery source parameters are overwritten with the new parameter values that you specify. |
|
PutDeliverySourceAsync(PutDeliverySourceRequest, CancellationToken) |
Creates or updates a logical delivery source. A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. To configure logs delivery between a delivery destination and an Amazon Web Services service that is supported as a delivery source, create a delivery source with this operation, create a delivery destination (PutDeliveryDestination), and then pair them by creating a delivery (CreateDelivery).
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination. Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services. If you use this operation to update an existing delivery source, all the current delivery source parameters are overwritten with the new parameter values that you specify. |
|
PutDestination(PutDestinationRequest) |
Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions. A destination encapsulates a physical resource (such as an Amazon Kinesis stream). With a destination, you can subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents.
Through an access policy, a destination controls what is written to it. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.
To perform a PutDestination operation, you must also have the iam:PassRole permission. |
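A minimal sketch of the two-step cross-account setup described above, using the .NET SDK. The destination name, ARNs, and account IDs are placeholders:

```csharp
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

// Create a destination that wraps a Kinesis stream (hypothetical ARNs).
var destination = await client.PutDestinationAsync(new PutDestinationRequest
{
    DestinationName = "shared-log-destination",
    TargetArn = "arn:aws:kinesis:us-east-1:111111111111:stream/central-logs",
    RoleArn = "arn:aws:iam::111111111111:role/CWLtoKinesisRole" // role CloudWatch Logs can assume
});

// Without an access policy, no other account can subscribe to this destination,
// so attach one that allows a hypothetical sender account.
var policy = @"{
  ""Version"": ""2012-10-17"",
  ""Statement"": [{
    ""Effect"": ""Allow"",
    ""Principal"": { ""AWS"": ""222222222222"" },
    ""Action"": ""logs:PutSubscriptionFilter"",
    ""Resource"": """ + destination.Destination.Arn + @"""
  }]
}";
await client.PutDestinationPolicyAsync(new PutDestinationPolicyRequest
{
    DestinationName = "shared-log-destination",
    AccessPolicy = policy
});
```

The sending account can then register a subscription filter against the destination's ARN.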
|
PutDestinationAsync(PutDestinationRequest, CancellationToken) |
Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions. A destination encapsulates a physical resource (such as an Amazon Kinesis stream). With a destination, you can subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents.
Through an access policy, a destination controls what is written to it. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.
To perform a PutDestination operation, you must also have the iam:PassRole permission. |
|
PutDestinationPolicy(PutDestinationPolicyRequest) |
Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination. |
|
PutDestinationPolicyAsync(PutDestinationPolicyRequest, CancellationToken) |
Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination. |
|
PutIndexPolicy(PutIndexPolicyRequest) |
Creates or updates a field index policy for the specified log group. Only log groups in the Standard log class support field index policies. For more information about log classes, see Log classes. You can use field index policies to create field indexes on fields found in log events in the log group. Creating field indexes speeds up and lowers the costs for CloudWatch Logs Insights queries that reference those field indexes, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, userID, and instance IDs. For more information, see Create field indexes to improve query performance and reduce costs. To find the fields that are in your log group events, use the GetLogGroupFields operation.
For example, suppose you have created a field index for requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId IN [value, value, ...] will attempt to process only the log events where the indexed field matches the specified value.
Each index policy has quotas and restrictions, including the following:
Matches of log events to the names of indexed fields are case-sensitive. For example, a field index of RequestId won't match a log event containing requestId.
Log group-level field index policies created with PutIndexPolicy override account-level field index policies created with PutAccountPolicy. |
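A sketch of creating a field index policy for one log group; the log group name and field names are placeholders, and the policy document is assumed to be a JSON string listing the fields to index:

```csharp
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

// Index fields that queries filter on often but that match only a small
// fraction of events, so Logs Insights can skip non-matching events.
await client.PutIndexPolicyAsync(new PutIndexPolicyRequest
{
    LogGroupIdentifier = "/my-app/prod",
    PolicyDocument = "{\"Fields\": [\"requestId\", \"sessionId\"]}"
});
```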
|
PutIndexPolicyAsync(PutIndexPolicyRequest, CancellationToken) |
Creates or updates a field index policy for the specified log group. Only log groups in the Standard log class support field index policies. For more information about log classes, see Log classes. You can use field index policies to create field indexes on fields found in log events in the log group. Creating field indexes speeds up and lowers the costs for CloudWatch Logs Insights queries that reference those field indexes, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, userID, and instance IDs. For more information, see Create field indexes to improve query performance and reduce costs. To find the fields that are in your log group events, use the GetLogGroupFields operation.
For example, suppose you have created a field index for requestId. Then, any CloudWatch Logs Insights query on that log group that includes requestId = value or requestId IN [value, value, ...] will attempt to process only the log events where the indexed field matches the specified value.
Each index policy has quotas and restrictions, including the following:
Matches of log events to the names of indexed fields are case-sensitive. For example, a field index of RequestId won't match a log event containing requestId.
Log group-level field index policies created with PutIndexPolicy override account-level field index policies created with PutAccountPolicy. |
|
PutIntegration(PutIntegrationRequest) |
Creates an integration between CloudWatch Logs and another service in this account. Currently, only integrations with OpenSearch Service are supported, and currently you can have only one integration in your account. Integrating with OpenSearch Service makes it possible for you to create curated vended logs dashboards, powered by OpenSearch Service analytics. For more information, see Vended log dashboards powered by Amazon OpenSearch Service. You can use this operation only to create a new integration. You can't modify an existing integration. |
|
PutIntegrationAsync(PutIntegrationRequest, CancellationToken) |
Creates an integration between CloudWatch Logs and another service in this account. Currently, only integrations with OpenSearch Service are supported, and currently you can have only one integration in your account. Integrating with OpenSearch Service makes it possible for you to create curated vended logs dashboards, powered by OpenSearch Service analytics. For more information, see Vended log dashboards powered by Amazon OpenSearch Service. You can use this operation only to create a new integration. You can't modify an existing integration. |
|
PutLogEvents(PutLogEventsRequest) |
Uploads a batch of log events to the specified log stream.
The sequence token is now ignored in PutLogEvents actions. PutLogEvents actions are now always accepted and never return InvalidSequenceTokenException or DataAlreadyAcceptedException even if the sequence token is not valid.
The batch of events must satisfy the following constraints:
The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
None of the log events in the batch can be more than 2 hours in the future.
None of the log events in the batch can be more than 14 days in the past, or older than the retention period of the log group.
The log events in the batch must be in chronological order by their timestamp.
A batch of log events in a single request cannot span more than 24 hours.
The maximum number of log events in a batch is 10,000.
If a call to PutLogEvents returns "UnrecognizedClientException" the most likely cause is a non-valid Amazon Web Services access key ID or secret key. |
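A minimal sketch of uploading a small batch; the log group and stream names are placeholders and are assumed to already exist:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

// Events must be in chronological order by timestamp.
await client.PutLogEventsAsync(new PutLogEventsRequest
{
    LogGroupName = "/my-app/prod",
    LogStreamName = "web-01",
    LogEvents = new List<InputLogEvent>
    {
        new InputLogEvent { Timestamp = DateTime.UtcNow.AddSeconds(-1), Message = "starting request" },
        new InputLogEvent { Timestamp = DateTime.UtcNow, Message = "request complete: 200" }
    }
});
```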
|
PutLogEventsAsync(PutLogEventsRequest, CancellationToken) |
Uploads a batch of log events to the specified log stream.
The sequence token is now ignored in PutLogEvents actions. PutLogEvents actions are now always accepted and never return InvalidSequenceTokenException or DataAlreadyAcceptedException even if the sequence token is not valid.
The batch of events must satisfy the following constraints:
The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
None of the log events in the batch can be more than 2 hours in the future.
None of the log events in the batch can be more than 14 days in the past, or older than the retention period of the log group.
The log events in the batch must be in chronological order by their timestamp.
A batch of log events in a single request cannot span more than 24 hours.
The maximum number of log events in a batch is 10,000.
If a call to PutLogEvents returns "UnrecognizedClientException" the most likely cause is a non-valid Amazon Web Services access key ID or secret key. |
|
PutMetricFilter(PutMetricFilterRequest) |
Creates or updates a metric filter and associates it with the specified log group. With metric filters, you can configure rules to extract metric data from log events ingested through PutLogEvents. The maximum number of metric filters that can be associated with a log group is 100. Using regular expressions to create metric filters is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in metric filters, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail. When you create a metric filter, you can also optionally assign a unit and dimensions to the metric that is created.
Metrics extracted from log events are charged as custom metrics. To prevent unexpected high charges, do not specify high-cardinality fields such as IPAddress or requestID as dimensions. Each different value found for a dimension is treated as a separate metric and accrues charges as a separate custom metric.
CloudWatch Logs might disable a metric filter if it generates 1,000 different name/value pairs for your specified dimensions within one hour. You can also set up a billing alarm to alert you if your charges are higher than expected. For more information, see Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges. |
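A sketch of a metric filter that counts occurrences of a literal term, with no dimensions so it cannot generate high-cardinality custom metrics. Names and namespace are placeholders:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

// Emit a value of 1 to MyApp/ErrorCount each time "ERROR" appears in a log event.
await client.PutMetricFilterAsync(new PutMetricFilterRequest
{
    LogGroupName = "/my-app/prod",
    FilterName = "error-count",
    FilterPattern = "ERROR",
    MetricTransformations = new List<MetricTransformation>
    {
        new MetricTransformation
        {
            MetricName = "ErrorCount",
            MetricNamespace = "MyApp",
            MetricValue = "1",
            DefaultValue = 0 // report 0 when no events match
        }
    }
});
```

An alarm on the resulting metric can then notify you when the error rate exceeds a threshold.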
|
PutMetricFilterAsync(PutMetricFilterRequest, CancellationToken) |
Creates or updates a metric filter and associates it with the specified log group. With metric filters, you can configure rules to extract metric data from log events ingested through PutLogEvents. The maximum number of metric filters that can be associated with a log group is 100. Using regular expressions to create metric filters is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in metric filters, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail. When you create a metric filter, you can also optionally assign a unit and dimensions to the metric that is created.
Metrics extracted from log events are charged as custom metrics. To prevent unexpected high charges, do not specify high-cardinality fields such as IPAddress or requestID as dimensions. Each different value found for a dimension is treated as a separate metric and accrues charges as a separate custom metric.
CloudWatch Logs might disable a metric filter if it generates 1,000 different name/value pairs for your specified dimensions within one hour. You can also set up a billing alarm to alert you if your charges are higher than expected. For more information, see Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges. |
|
PutQueryDefinition(PutQueryDefinitionRequest) |
Creates or updates a query definition for CloudWatch Logs Insights. For more information, see Analyzing Log Data with CloudWatch Logs Insights.
To update a query definition, specify its queryDefinitionId in your request. The values of name, queryString, and logGroupNames are changed to the values that you specify in your update operation; no values are retained from the current query definition.
You must have the logs:PutQueryDefinition permission to be able to perform this operation. |
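A sketch of saving a reusable Logs Insights query; the name, log group, and query string are placeholders. Omitting QueryDefinitionId creates a new definition, while supplying one overwrites every field of the existing definition:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

var response = await client.PutQueryDefinitionAsync(new PutQueryDefinitionRequest
{
    Name = "recent-errors",
    LogGroupNames = new List<string> { "/my-app/prod" },
    QueryString = "fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 20"
});

// Keep the returned ID if you plan to update the definition later.
var savedId = response.QueryDefinitionId;
```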
|
PutQueryDefinitionAsync(PutQueryDefinitionRequest, CancellationToken) |
Creates or updates a query definition for CloudWatch Logs Insights. For more information, see Analyzing Log Data with CloudWatch Logs Insights.
To update a query definition, specify its queryDefinitionId in your request. The values of name, queryString, and logGroupNames are changed to the values that you specify in your update operation; no values are retained from the current query definition.
You must have the logs:PutQueryDefinition permission to be able to perform this operation. |
|
PutResourcePolicy(PutResourcePolicyRequest) |
Creates or updates a resource policy allowing other Amazon Web Services services to put log events to this account, such as Amazon Route 53. An account can have up to 10 resource policies per Amazon Web Services Region. |
|
PutResourcePolicyAsync(PutResourcePolicyRequest, CancellationToken) |
Creates or updates a resource policy allowing other Amazon Web Services services to put log events to this account, such as Amazon Route 53. An account can have up to 10 resource policies per Amazon Web Services Region. |
|
PutRetentionPolicy(PutRetentionPolicyRequest) |
Sets the retention of the specified log group. With a retention policy, you can configure
the number of days for which to retain log events in the specified log group.
CloudWatch Logs doesn't immediately delete log events when they reach their retention
setting. It typically takes up to 72 hours after that before log events are deleted,
but in rare situations might take longer.
To illustrate, imagine that you change a log group to have a longer retention setting
when it contains log events that are past the expiration date, but haven't been deleted.
Those log events will take up to 72 hours to be deleted after the new retention date
is reached. To make sure that log data is deleted permanently, keep a log group at
its lower retention setting until 72 hours after the previous retention period ends.
Alternatively, wait to change the retention setting until you confirm that the earlier
log events are deleted.
When log events reach their retention setting they are marked for deletion. After they are marked for deletion, they do not add to your archival storage costs anymore, even if they are not actually deleted until later. These log events marked for deletion are also not included when you use an API to retrieve the storedBytes value to see how many bytes a log group is storing. |
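A minimal sketch of setting a retention policy; the log group name is a placeholder, and RetentionInDays accepts only the documented set of values (1, 3, 5, 7, 14, 30, and so on):

```csharp
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

// Keep log events in this group for 30 days before they are marked for deletion.
await client.PutRetentionPolicyAsync(new PutRetentionPolicyRequest
{
    LogGroupName = "/my-app/prod",
    RetentionInDays = 30
});
```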
|
PutRetentionPolicyAsync(PutRetentionPolicyRequest, CancellationToken) |
Sets the retention of the specified log group. With a retention policy, you can configure
the number of days for which to retain log events in the specified log group.
CloudWatch Logs doesn't immediately delete log events when they reach their retention
setting. It typically takes up to 72 hours after that before log events are deleted,
but in rare situations might take longer.
To illustrate, imagine that you change a log group to have a longer retention setting
when it contains log events that are past the expiration date, but haven't been deleted.
Those log events will take up to 72 hours to be deleted after the new retention date
is reached. To make sure that log data is deleted permanently, keep a log group at
its lower retention setting until 72 hours after the previous retention period ends.
Alternatively, wait to change the retention setting until you confirm that the earlier
log events are deleted.
When log events reach their retention setting they are marked for deletion. After they are marked for deletion, they do not add to your archival storage costs anymore, even if they are not actually deleted until later. These log events marked for deletion are also not included when you use an API to retrieve the storedBytes value to see how many bytes a log group is storing. |
|
PutSubscriptionFilter(PutSubscriptionFilterRequest) |
Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format. The following destinations are supported for subscription filters: an Amazon Kinesis data stream, a logical destination created with PutDestination that belongs to a different account (for cross-account delivery), an Amazon Data Firehose delivery stream, and a Lambda function.
Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in filterName.
Using regular expressions to create subscription filters is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in subscription filters, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
To perform a PutSubscriptionFilter operation for any destination except a Lambda function, you must also have the iam:PassRole permission. |
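A sketch of streaming matching events to a Kinesis stream; all names and ARNs are placeholders, and RoleArn must reference a role that CloudWatch Logs can assume to write to the stream:

```csharp
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

// Deliver every event whose message contains "ERROR" to the Kinesis stream.
await client.PutSubscriptionFilterAsync(new PutSubscriptionFilterRequest
{
    LogGroupName = "/my-app/prod",
    FilterName = "errors-to-kinesis",
    FilterPattern = "ERROR",
    DestinationArn = "arn:aws:kinesis:us-east-1:111111111111:stream/central-logs",
    RoleArn = "arn:aws:iam::111111111111:role/CWLtoKinesisRole"
});
```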
|
PutSubscriptionFilterAsync(PutSubscriptionFilterRequest, CancellationToken) |
Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format. The following destinations are supported for subscription filters: an Amazon Kinesis data stream, a logical destination created with PutDestination that belongs to a different account (for cross-account delivery), an Amazon Data Firehose delivery stream, and a Lambda function.
Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in filterName.
Using regular expressions to create subscription filters is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in subscription filters, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
To perform a PutSubscriptionFilter operation for any destination except a Lambda function, you must also have the iam:PassRole permission. |
|
PutTransformer(PutTransformerRequest) |
Creates or updates a log transformer for a single log group. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs the transformations at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters. You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region. A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. The processors work one after another, in the order that you list them, like a pipeline. For more information about the available processors to use in a transformer, see Processors that you can use. Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies. You can create transformers only for the log groups in the Standard log class.
You can also set up a transformer at the account level. For more information, see
PutAccountPolicy.
If there is both a log-group level transformer created with PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer created with PutTransformer. |
|
PutTransformerAsync(PutTransformerRequest, CancellationToken) |
Creates or updates a log transformer for a single log group. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs the transformations at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters. You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region. A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. The processors work one after another, in the order that you list them, like a pipeline. For more information about the available processors to use in a transformer, see Processors that you can use. Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies. You can create transformers only for the log groups in the Standard log class.
You can also set up a transformer at the account level. For more information, see
PutAccountPolicy.
If there is both a log-group level transformer created with PutTransformer and an account-level transformer that could apply to the same log group, the log group uses only the log-group level transformer created with PutTransformer. |
|
StartLiveTail(StartLiveTailRequest) |
Starts a Live Tail streaming session for one or more log groups. A Live Tail session returns a stream of log events that have been recently ingested in the log groups. For more information, see Use Live Tail to view logs in near real time. The response to this operation is a response stream, over which the server sends live log events and the client receives them. The following objects are sent over the stream: a single LiveTailSessionStart object at the start of the session, followed by periodic LiveTailSessionUpdate objects that contain arrays of the matching log events.
You can end a session before it times out by closing the session stream or by closing the client that is receiving the stream. The session also ends if the established connection between the client and the server breaks. For examples of using an SDK to start a Live Tail session, see Start a Live Tail session using an Amazon Web Services SDK. |
|
StartLiveTailAsync(StartLiveTailRequest, CancellationToken) |
Starts a Live Tail streaming session for one or more log groups. A Live Tail session returns a stream of log events that have been recently ingested in the log groups. For more information, see Use Live Tail to view logs in near real time. The response to this operation is a response stream, over which the server sends live log events and the client receives them. The following objects are sent over the stream: a single LiveTailSessionStart object at the start of the session, followed by periodic LiveTailSessionUpdate objects that contain arrays of the matching log events.
You can end a session before it times out by closing the session stream or by closing the client that is receiving the stream. The session also ends if the established connection between the client and the server breaks. For examples of using an SDK to start a Live Tail session, see Start a Live Tail session using an Amazon Web Services SDK. |
|
StartQuery(StartQueryRequest) |
Starts a query of one or more log groups using CloudWatch Logs Insights. You specify the log groups and time range to query and the query string to use. For more information, see CloudWatch Logs Insights Query Syntax.
After you run a query using StartQuery, the query results are stored by CloudWatch Logs. You can use GetQueryResults to retrieve the results of a query, using the queryId that StartQuery returns.
To specify the log groups to query, a StartQuery operation must include exactly one of the following parameters: logGroupName, logGroupNames, or logGroupIdentifiers.
If you have associated a KMS key with the query results in this account, then StartQuery uses that key to encrypt the results when it stores them. If no key is associated with query results, the query results are encrypted with the default CloudWatch Logs encryption method. Queries time out after 60 minutes of runtime. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.
If you are using CloudWatch cross-account observability, you can use this operation
in a monitoring account to start a query in a linked source account. For more information,
see CloudWatch
cross-account observability. For a cross-account StartQuery operation, the query definition must be defined in the monitoring account. You can have up to 30 concurrent CloudWatch Logs Insights queries, including queries that have been added to dashboards. |
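A sketch of running a query and polling for its results; the log group name and query string are placeholders, and StartTime/EndTime are epoch seconds:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

// Query the last hour of events.
var start = await client.StartQueryAsync(new StartQueryRequest
{
    LogGroupName = "/my-app/prod",
    StartTime = DateTimeOffset.UtcNow.AddHours(-1).ToUnixTimeSeconds(),
    EndTime = DateTimeOffset.UtcNow.ToUnixTimeSeconds(),
    QueryString = "fields @timestamp, @message | limit 10"
});

// Poll until the query finishes; results are not available while it is
// still scheduled or running.
GetQueryResultsResponse results;
do
{
    await Task.Delay(TimeSpan.FromSeconds(1));
    results = await client.GetQueryResultsAsync(new GetQueryResultsRequest { QueryId = start.QueryId });
} while (results.Status == QueryStatus.Running || results.Status == QueryStatus.Scheduled);
```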
|
StartQueryAsync(StartQueryRequest, CancellationToken) |
Starts a query of one or more log groups using CloudWatch Logs Insights. You specify the log groups and time range to query and the query string to use. For more information, see CloudWatch Logs Insights Query Syntax.
After you run a query using StartQuery, the query results are stored by CloudWatch Logs. You can use GetQueryResults to retrieve the results of a query, using the queryId that StartQuery returns.
To specify the log groups to query, a StartQuery operation must include exactly one of the following parameters: logGroupName, logGroupNames, or logGroupIdentifiers.
If you have associated a KMS key with the query results in this account, then StartQuery uses that key to encrypt the results when it stores them. If no key is associated with query results, the query results are encrypted with the default CloudWatch Logs encryption method. Queries time out after 60 minutes of runtime. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.
If you are using CloudWatch cross-account observability, you can use this operation
in a monitoring account to start a query in a linked source account. For more information,
see CloudWatch
cross-account observability. For a cross-account StartQuery operation, the query definition must be defined in the monitoring account. You can have up to 30 concurrent CloudWatch Logs Insights queries, including queries that have been added to dashboards. |
|
StopQuery(StopQueryRequest) |
Stops a CloudWatch Logs Insights query that is in progress. If the query has already ended, the operation returns an error indicating that the specified query is not running. |
|
StopQueryAsync(StopQueryRequest, CancellationToken) |
Stops a CloudWatch Logs Insights query that is in progress. If the query has already ended, the operation returns an error indicating that the specified query is not running. |
|
TagLogGroup(TagLogGroupRequest) |
The TagLogGroup operation is on the path to deprecation. We recommend that you use
TagResource
instead.
Adds or updates the specified tags for the specified log group. To list the tags for a log group, use ListTagsForResource. To remove tags, use UntagResource. For more information about tags, see Tag Log Groups in Amazon CloudWatch Logs in the Amazon CloudWatch Logs User Guide.
CloudWatch Logs doesn't support IAM policies that prevent users from assigning specified tags to log groups using the aws:Resource/key-name or aws:TagKeys condition keys. For more information about using tags to control access, see Controlling access to Amazon Web Services resources using tags. |
|
TagLogGroupAsync(TagLogGroupRequest, CancellationToken) |
The TagLogGroup operation is on the path to deprecation. We recommend that you use
TagResource
instead.
Adds or updates the specified tags for the specified log group. To list the tags for a log group, use ListTagsForResource. To remove tags, use UntagResource. For more information about tags, see Tag Log Groups in Amazon CloudWatch Logs in the Amazon CloudWatch Logs User Guide.
CloudWatch Logs doesn't support IAM policies that prevent users from assigning specified tags to log groups using the aws:Resource/key-name or aws:TagKeys condition keys. For more information about using tags to control access, see Controlling access to Amazon Web Services resources using tags. |
|
TagResource(TagResourceRequest) |
Assigns one or more tags (key-value pairs) to the specified CloudWatch Logs resource. Currently, the only CloudWatch Logs resources that can be tagged are log groups and destinations. Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values. Tags don't have any semantic meaning to Amazon Web Services and are interpreted strictly as strings of characters.
You can use the TagResource action with a resource that already has tags. If you specify a new tag key for the resource, this tag is appended to the list of tags associated with the resource. If you specify a tag key that is already associated with the resource, the new tag value that you specify replaces the previous value for that tag. You can associate as many as 50 tags with a CloudWatch Logs resource. |
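A minimal sketch of tagging a log group by ARN; the ARN and tag values are placeholders. Re-using an existing tag key replaces its value:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

await client.TagResourceAsync(new TagResourceRequest
{
    ResourceArn = "arn:aws:logs:us-east-1:111111111111:log-group:/my-app/prod",
    Tags = new Dictionary<string, string>
    {
        ["team"] = "payments",
        ["env"] = "prod"
    }
});
```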
|
TagResourceAsync(TagResourceRequest, CancellationToken) |
Assigns one or more tags (key-value pairs) to the specified CloudWatch Logs resource. Currently, the only CloudWatch Logs resources that can be tagged are log groups and destinations. Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values. Tags don't have any semantic meaning to Amazon Web Services and are interpreted strictly as strings of characters.
You can use the TagResource action with a resource that already has tags. If you specify a new tag key for the resource, this tag is appended to the list of tags associated with the resource. If you specify a tag key that is already associated with the resource, the new tag value that you specify replaces the previous value for that tag. You can associate as many as 50 tags with a CloudWatch Logs resource. |
|
TestMetricFilter(TestMetricFilterRequest) |
Tests the filter pattern of a metric filter against a sample of log event messages. You can use this operation to validate the correctness of a metric filter pattern. |
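A sketch of validating a pattern against sample messages before creating a metric filter; the sample messages are placeholders:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

var client = new AmazonCloudWatchLogsClient();

// Check which of the sample messages the pattern "ERROR" would match;
// we would expect only the first message to appear in test.Matches.
var test = await client.TestMetricFilterAsync(new TestMetricFilterRequest
{
    FilterPattern = "ERROR",
    LogEventMessages = new List<string>
    {
        "2024-01-01 ERROR something failed",
        "2024-01-01 INFO all good"
    }
});
```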
|
TestMetricFilterAsync(TestMetricFilterRequest, CancellationToken) |
Tests the filter pattern of a metric filter against a sample of log event messages. You can use this operation to validate the correctness of a metric filter pattern. |
|
TestTransformer(TestTransformerRequest) |
Use this operation to test a log transformer. You enter the transformer configuration and a set of log events to test with. The operation responds with an array that includes the original log events and the transformed versions. |
|
TestTransformerAsync(TestTransformerRequest, CancellationToken) |
Use this operation to test a log transformer. You enter the transformer configuration and a set of log events to test with. The operation responds with an array that includes the original log events and the transformed versions. |
|
UntagLogGroup(UntagLogGroupRequest) |
The UntagLogGroup operation is on the path to deprecation. We recommend that you use
UntagResource
instead.
Removes the specified tags from the specified log group. To list the tags for a log group, use ListTagsForResource. To add tags, use TagResource.
CloudWatch Logs doesn't support IAM policies that prevent users from assigning specified tags to log groups using the aws:Resource/key-name or aws:TagKeys condition keys. |
|
UntagLogGroupAsync(UntagLogGroupRequest, CancellationToken) |
The UntagLogGroup operation is on the path to deprecation. We recommend that you use
UntagResource
instead.
Removes the specified tags from the specified log group. To list the tags for a log group, use ListTagsForResource. To add tags, use TagResource.
CloudWatch Logs doesn't support IAM policies that prevent users from assigning specified tags to log groups using the aws:Resource/key-name or aws:TagKeys condition keys. |
|
UntagResource(UntagResourceRequest) |
Removes one or more tags from the specified resource. |
|
UntagResourceAsync(UntagResourceRequest, CancellationToken) |
Removes one or more tags from the specified resource. |
|
UpdateAnomaly(UpdateAnomalyRequest) |
Use this operation to suppress anomaly detection for a specified anomaly or pattern. If you suppress an anomaly, CloudWatch Logs won't report new occurrences of that anomaly and won't update that anomaly with new data. If you suppress a pattern, CloudWatch Logs won't report any anomalies related to that pattern.
You must specify either anomalyId or patternId, but you can't specify both parameters in the same operation.
If you have previously used this operation to suppress detection of a pattern or anomaly, you can use it again to cause CloudWatch Logs to end the suppression. To do this, use this operation and specify the anomaly or pattern to stop suppressing, and omit the suppressionType and suppressionPeriod parameters. |
|
UpdateAnomalyAsync(UpdateAnomalyRequest, CancellationToken) |
Use this operation to suppress anomaly detection for a specified anomaly or pattern. If you suppress an anomaly, CloudWatch Logs won't report new occurrences of that anomaly and won't update that anomaly with new data. If you suppress a pattern, CloudWatch Logs won't report any anomalies related to that pattern.
You must specify either anomalyId or patternId, but you can't specify both parameters in the same operation.
If you have previously used this operation to suppress detection of a pattern or anomaly, you can use it again to cause CloudWatch Logs to end the suppression. To do this, use this operation and specify the anomaly or pattern to stop suppressing, and omit the suppressionType and suppressionPeriod parameters. |
|
UpdateDeliveryConfiguration(UpdateDeliveryConfigurationRequest) |
Use this operation to update the configuration of a delivery to change either the S3 path pattern or the format of the delivered logs. You can't use this operation to change the source or destination of the delivery. |
|
UpdateDeliveryConfigurationAsync(UpdateDeliveryConfigurationRequest, CancellationToken) |
Use this operation to update the configuration of a delivery to change either the S3 path pattern or the format of the delivered logs. You can't use this operation to change the source or destination of the delivery. |
|
UpdateLogAnomalyDetector(UpdateLogAnomalyDetectorRequest) |
Updates an existing log anomaly detector. |
|
UpdateLogAnomalyDetectorAsync(UpdateLogAnomalyDetectorRequest, CancellationToken) |
Updates an existing log anomaly detector. |
Name | Description | |
---|---|---|
AfterResponseEvent | Inherited from Amazon.Runtime.AmazonServiceClient. | |
BeforeRequestEvent | Inherited from Amazon.Runtime.AmazonServiceClient. | |
ExceptionEvent | Inherited from Amazon.Runtime.AmazonServiceClient. |
.NET:
Supported in: 8.0 and newer, Core 3.1
.NET Standard:
Supported in: 2.0
.NET Framework:
Supported in: 4.5 and newer, 3.5