Filtering data events by using advanced event selectors
This section describes how you can use advanced event selectors to create fine-grained selectors, which help you control costs by logging only the specific data events of interest.

For example:

- You can include or exclude specific API calls by adding a filter on the eventName field.
- You can include or exclude logging for specific resources by adding a filter on the resources.ARN field. For example, if you were logging S3 data events, you could exclude logging for the S3 bucket for your trail.
- You can choose to log only read events or only write events by adding a filter on the readOnly field.
The following table provides additional information about the configurable fields for advanced event selectors.
Field | Required | Valid operators | Description |
---|---|---|---|
eventCategory | Yes | Equals | This field is set to Data for data events. Supported on trails: Yes. Supported on event data stores: Yes |
resources.type | Yes | Equals | This field is used to select the resource type for which you want to log data events. The Data events table shows the possible values. Supported on trails: Yes. Supported on event data stores: Yes |
readOnly | No | Equals | This is an optional field used to include or exclude data events based on the readOnly value. Supported on trails: Yes. Supported on event data stores: Yes |
eventName | No | Equals, NotEquals, StartsWith, NotStartsWith, EndsWith, NotEndsWith | This is an optional field used to filter in or filter out any data event logged to CloudTrail, such as PutBucket or GetSnapshotBlock. If you're using the AWS CLI, you can specify multiple values by separating each value with a comma. If you're using the console, you can specify multiple values by creating a condition for each value. Supported on trails: Yes. Supported on event data stores: Yes |
resources.ARN | No | Equals, NotEquals, StartsWith, NotStartsWith, EndsWith, NotEndsWith | This is an optional field used to exclude or include data events for a specific resource by providing the resource ARN. If you're using the AWS CLI, you can specify multiple values by separating each value with a comma. If you're using the console, you can specify multiple values by creating a condition for each value. Supported on trails: Yes. Supported on event data stores: Yes |
eventSource | No | Equals, NotEquals, StartsWith, NotStartsWith, EndsWith, NotEndsWith | You can use this field to include or exclude specific event sources. Supported on trails: No. Supported on event data stores: Yes |
eventType | No | Equals, NotEquals | The eventType to include or exclude. Supported on trails: No. Supported on event data stores: Yes |
sessionCredentialFromConsole | No | Equals, NotEquals | Include or exclude events originating from an AWS Management Console session. This field can be set to true or false. Supported on trails: No. Supported on event data stores: Yes |
userIdentity.arn | No | Equals, NotEquals, StartsWith, NotStartsWith, EndsWith, NotEndsWith | The ARN of an STS assumed role that you want to include or exclude. Supported on trails: No. Supported on event data stores: Yes |
To log data events using the CloudTrail console, you choose the Data events option and then select the Resource type of interest when you are creating or updating a trail or event data store. The Data events table shows the possible resource types you can choose on the CloudTrail console.
To log data events with the AWS CLI, configure the --advanced-event-selectors parameter to set the eventCategory equal to Data and the resources.type value equal to the resource type value for which you want to log data events. The Data events table lists the available resource types.
For example, if you wanted to log data events for all Cognito Identity pools, you’d
configure the --advanced-event-selectors
parameter to look like
this:
--advanced-event-selectors '[
  {
    "Name": "Log Cognito data events on Identity pools",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["Data"] },
      { "Field": "resources.type", "Equals": ["AWS::Cognito::IdentityPool"] }
    ]
  }
]'
The preceding example logs all Cognito data events on Identity pools. You can further refine the advanced event selectors to filter on the eventName, readOnly, and resources.ARN fields to log specific events of interest or exclude events that aren't of interest.
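As a sketch of that refinement, the selector JSON can be composed programmatically before it is passed to the CLI. This is an illustrative example only: the identity pool ARN below is a hypothetical placeholder, and the statement name is invented; the field names and operators come from this section.

```python
import json

# Hedged sketch: narrow the Cognito example above to read-only events on a
# single (hypothetical) identity pool by combining readOnly and resources.ARN
# filters with the required eventCategory and resources.type fields.
selector = {
    "Name": "Log read-only Cognito data events on one identity pool",
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::Cognito::IdentityPool"]},
        {"Field": "readOnly", "Equals": ["true"]},
        # Placeholder ARN prefix; replace with your identity pool's ARN.
        {"Field": "resources.ARN",
         "StartsWith": ["arn:aws:cognito-identity:us-east-1:123456789012:identitypool/"]},
    ],
}

# The --advanced-event-selectors parameter takes a JSON array of selector
# statements, so serialize a list even for a single statement.
cli_argument = json.dumps([selector], indent=2)
```

Generating the argument this way can be easier to maintain than hand-editing long inline JSON, and the result is identical to writing the array directly on the command line.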
You can configure advanced event selectors to filter data events based on multiple fields. For example, you can configure advanced event selectors to log all Amazon S3 PutObject and DeleteObject API calls but exclude event logging for a specific S3 bucket, as shown in the following example. Replace amzn-s3-demo-bucket with the name of your bucket.
--advanced-event-selectors '[
  {
    "Name": "Log PutObject and DeleteObject events for all but one bucket",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["Data"] },
      { "Field": "resources.type", "Equals": ["AWS::S3::Object"] },
      { "Field": "eventName", "Equals": ["PutObject","DeleteObject"] },
      { "Field": "resources.ARN", "NotStartsWith": ["arn:aws:s3:::amzn-s3-demo-bucket/"] }
    ]
  }
]'
You can also include multiple conditions for a field. For information on how multiple conditions are evaluated, see How CloudTrail evaluates multiple conditions for a field.
You can use advanced event selectors to log both management and data events. To log data events for multiple resource types, add a field selector statement for each resource type that you want to log data events for.
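The multiple-resource-type point can be sketched as follows; the resource types are taken from this section's table, while the statement names are illustrative.

```python
import json

# One selector statement per resource type. CloudTrail evaluates each
# statement independently, so events matching either statement are logged.
selectors = [
    {
        "Name": "Log S3 object data events",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
        ],
    },
    {
        "Name": "Log Lambda function data events",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::Lambda::Function"]},
        ],
    },
]

# Serialized form suitable for the --advanced-event-selectors parameter.
advanced_event_selectors = json.dumps(selectors)
```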
Note
Trails can use either basic event selectors or advanced event selectors, but not both. If you apply advanced event selectors to a trail, any existing basic event selectors are overwritten.
How CloudTrail evaluates multiple conditions for a field
For advanced event selectors, CloudTrail evaluates multiple conditions for a field as follows:
- DESELECT operators are AND'd together. If any of the DESELECT operator conditions are met, the event is not delivered. These are the valid DESELECT operators for advanced event selectors:
  - NotEndsWith
  - NotEquals
  - NotStartsWith
- SELECT operators are OR'd together. These are the valid SELECT operators for advanced event selectors:
  - EndsWith
  - Equals
  - StartsWith
- Combinations of SELECT and DESELECT operators follow the above rules, and the two groups are AND'd together.
Example showing multiple conditions for the resources.ARN field
The following example event selector statement collects data events for the AWS::S3::Object resource type and applies multiple conditions on the resources.ARN field.
{
  "Name": "S3Select",
  "FieldSelectors": [
    { "Field": "eventCategory", "Equals": [ "Data" ] },
    { "Field": "resources.type", "Equals": [ "AWS::S3::Object" ] },
    {
      "Field": "resources.ARN",
      "Equals": [ "arn:aws:s3:::amzn-s3-demo-bucket/object1" ],
      "StartsWith": [ "arn:aws:s3:::amzn-s3-demo-bucket/" ],
      "EndsWith": [ "object3" ],
      "NotStartsWith": [ "arn:aws:s3:::amzn-s3-demo-bucket/deselect" ],
      "NotEndsWith": [ "object5" ],
      "NotEquals": [ "arn:aws:s3:::amzn-s3-demo-bucket/object6" ]
    }
  ]
}
In the preceding example, Amazon S3 data events for the AWS::S3::Object resource will be delivered if:

- None of these DESELECT operator conditions are met:
  - the resources.ARN field NotStartsWith the value arn:aws:s3:::amzn-s3-demo-bucket/deselect
  - the resources.ARN field NotEndsWith the value object5
  - the resources.ARN field NotEquals the value arn:aws:s3:::amzn-s3-demo-bucket/object6
- At least one of these SELECT operator conditions is met:
  - the resources.ARN field Equals the value arn:aws:s3:::amzn-s3-demo-bucket/object1
  - the resources.ARN field StartsWith the value arn:aws:s3:::amzn-s3-demo-bucket/
  - the resources.ARN field EndsWith the value object3
Based on the evaluation logic:

- Data events for amzn-s3-demo-bucket/object1 will be delivered because it matches the value for the Equals operator and doesn't match any of the values for the NotStartsWith, NotEndsWith, and NotEquals operators.
- Data events for amzn-s3-demo-bucket/object2 will be delivered because it matches the value for the StartsWith operator and doesn't match any of the values for the NotStartsWith, NotEndsWith, and NotEquals operators.
- Data events for amzn-s3-demo-bucket1/object3 will be delivered because it matches the EndsWith operator and doesn't match any of the values for the NotStartsWith, NotEndsWith, and NotEquals operators.
- Data events for arn:aws:s3:::amzn-s3-demo-bucket/deselectObject4 will not be delivered because it matches the condition for the NotStartsWith operator even though it matches the condition for the StartsWith operator.
- Data events for arn:aws:s3:::amzn-s3-demo-bucket/object5 will not be delivered because it matches the condition for the NotEndsWith operator even though it matches the condition for the StartsWith operator.
- Data events for arn:aws:s3:::amzn-s3-demo-bucket/object6 will not be delivered because it matches the condition for the NotEquals operator even though it matches the condition for the StartsWith operator.
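The evaluation walkthrough above can be modeled in a few lines of Python. This is an illustrative sketch of the documented SELECT/DESELECT rules, not CloudTrail's actual implementation.

```python
SELECT_OPS = ("Equals", "StartsWith", "EndsWith")
DESELECT_OPS = ("NotEquals", "NotStartsWith", "NotEndsWith")

def matches(op, arn, value):
    # Equals/NotEquals compare the whole ARN; the other operators
    # compare a prefix or suffix of the ARN.
    if op in ("Equals", "NotEquals"):
        return arn == value
    if op in ("StartsWith", "NotStartsWith"):
        return arn.startswith(value)
    return arn.endswith(value)  # EndsWith / NotEndsWith

def delivered(arn, conditions):
    """Model of the documented rules: if the ARN matches any DESELECT
    value the event is excluded; otherwise at least one SELECT
    condition (if any are present) must match."""
    for op in DESELECT_OPS:
        if any(matches(op, arn, v) for v in conditions.get(op, [])):
            return False  # a DESELECT condition excluded the event
    selects = [(op, v) for op in SELECT_OPS for v in conditions.get(op, [])]
    if not selects:
        return True  # no SELECT conditions to satisfy
    return any(matches(op, arn, v) for op, v in selects)

# Conditions from the resources.ARN example above.
conds = {
    "Equals": ["arn:aws:s3:::amzn-s3-demo-bucket/object1"],
    "StartsWith": ["arn:aws:s3:::amzn-s3-demo-bucket/"],
    "EndsWith": ["object3"],
    "NotStartsWith": ["arn:aws:s3:::amzn-s3-demo-bucket/deselect"],
    "NotEndsWith": ["object5"],
    "NotEquals": ["arn:aws:s3:::amzn-s3-demo-bucket/object6"],
}
```

Running delivered against the six ARNs in the walkthrough reproduces the documented outcomes: object1, object2, and bucket1/object3 are delivered; deselectObject4, object5, and object6 are not.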
Filtering data events by eventName

Using advanced event selectors, you can include or exclude events based on the value of the eventName field. Filtering on the eventName field can help control costs, because you avoid incurring costs when the AWS service you're logging data events for adds support for new data APIs.

You can use any operator with the eventName field. You can use it to filter in or filter out any data event logged to CloudTrail, such as PutBucket or GetSnapshotBlock.
Filtering data events by eventName using the AWS Management Console

Take the following steps to filter on the eventName field using the CloudTrail console.
- Follow the steps in the create trail procedure, or follow the steps in the create event data store procedure.
- As you follow the steps to create the trail or event data store, make the following selections:
  - Choose Data events.
  - Choose the Resource type for which you want to log data events.
  - For Log selector template, choose Custom.
  - (Optional) In Selector name, enter a name to identify your selector. The selector name is a descriptive name for an advanced event selector, such as "Log data events for only two S3 buckets". The selector name is listed as Name in the advanced event selector and is viewable if you expand the JSON view.
  - In Advanced event selectors, do the following to filter on the eventName:
    - For Field, choose eventName.
    - For Operator, choose the condition operator. In this example, we'll choose equals because we want to log a specific API call.
    - For Value, enter the name of the event you want to filter on.
    - To filter on another eventName, choose + Condition. For information about how CloudTrail evaluates multiple conditions, see How CloudTrail evaluates multiple conditions for a field.
  - Choose +Field to add filters on other fields.
Filtering data events by eventName using the AWS CLI

Using the AWS CLI, you can filter on the eventName field to include or exclude specific events.

If you're updating an existing trail or event data store to log additional event selectors, get the current event selectors by running the get-event-selectors command for a trail, or the get-event-data-store command for an event data store. Then, update your event selectors to add a field selector for each data resource type that you want to log.
The following example logs S3 data events on a trail. The --advanced-event-selectors are configured to only log data events for the GetObject, PutObject, and DeleteObject API calls.

aws cloudtrail put-event-selectors \
  --trail-name trailName \
  --advanced-event-selectors '[
    {
      "Name": "Log GetObject, PutObject and DeleteObject S3 data events",
      "FieldSelectors": [
        { "Field": "eventCategory", "Equals": ["Data"] },
        { "Field": "resources.type", "Equals": ["AWS::S3::Object"] },
        { "Field": "eventName", "Equals": ["GetObject","PutObject","DeleteObject"] }
      ]
    }
  ]'
The next example creates a new event data store that logs data events for EBS Direct APIs but excludes ListChangedBlocks API calls. You can use the update-event-data-store command to update an existing event data store.

aws cloudtrail create-event-data-store \
  --name "eventDataStoreName" \
  --advanced-event-selectors '[
    {
      "Name": "Log all EBS Direct API data events except ListChangedBlocks",
      "FieldSelectors": [
        { "Field": "eventCategory", "Equals": ["Data"] },
        { "Field": "resources.type", "Equals": ["AWS::EC2::Snapshot"] },
        { "Field": "eventName", "NotEquals": ["ListChangedBlocks"] }
      ]
    }
  ]'
Filtering data events by resources.ARN

Using advanced event selectors, you can filter on the value of the resources.ARN field.

You can use any operator with resources.ARN, but if you use Equals or NotEquals, the value must exactly match the ARN of a valid resource for the resources.type value you've specified. To log all data events for all objects in a specific S3 bucket, use the StartsWith operator, and include only the bucket ARN as the matching value.

The following table shows the valid ARN format for each resources.type.
Note
You can't use the resources.ARN field to filter resource types that do not have ARNs.
resources.type | resources.ARN |
---|---|
AWS::DynamoDB::Table1 |
|
AWS::Lambda::Function |
|
|
|
AWS::AppConfig::Configuration |
|
AWS::B2BI::Transformer |
|
AWS::Bedrock::AgentAlias |
|
AWS::Bedrock::FlowAlias |
|
AWS::Bedrock::Guardrail |
|
AWS::Bedrock::KnowledgeBase |
|
AWS::Bedrock::Model |
The ARN must be in one of the following formats:
|
AWS::Cassandra::Table |
|
AWS::CloudFront::KeyValueStore |
|
AWS::CloudTrail::Channel |
|
AWS::CodeGuruProfiler::ProfilingGroup |
|
AWS::CodeWhisperer::Customization |
|
AWS::CodeWhisperer::Profile |
|
AWS::Cognito::IdentityPool |
|
AWS::DataExchange::Asset |
|
AWS::Deadline::Fleet |
|
AWS::Deadline::Job |
|
AWS::Deadline::Queue |
|
AWS::Deadline::Worker |
|
AWS::DynamoDB::Stream |
|
AWS::EC2::Snapshot |
|
AWS::EMRWAL::Workspace |
|
AWS::FinSpace::Environment |
|
AWS::Glue::Table |
|
AWS::GreengrassV2::ComponentVersion |
|
AWS::GreengrassV2::Deployment |
|
AWS::GuardDuty::Detector |
|
AWS::IoT::Certificate |
|
AWS::IoT::Thing |
|
AWS::IoTSiteWise::Asset |
|
AWS::IoTSiteWise::TimeSeries |
|
AWS::IoTTwinMaker::Entity |
|
AWS::IoTTwinMaker::Workspace |
|
AWS::KendraRanking::ExecutionPlan |
|
AWS::Kinesis::Stream |
|
AWS::Kinesis::StreamConsumer |
|
AWS::KinesisVideo::Stream |
|
AWS::GeoMaps::Provider |
|
AWS::GeoPlaces::Provider |
|
AWS::GeoRoutes::Provider |
|
AWS::MachineLearning::MlModel |
|
AWS::ManagedBlockchain::Network |
|
AWS::ManagedBlockchain::Node |
|
AWS::MedicalImaging::Datastore |
|
AWS::MWAA::Environment |
|
AWS::NeptuneGraph::Graph |
|
AWS::One::UKey |
|
AWS::One::User |
|
AWS::PaymentCryptography::Alias |
|
AWS::PaymentCryptography::Key |
|
AWS::PCAConnectorAD::Connector |
|
AWS::PCAConnectorSCEP::Connector |
|
AWS::QApps:QApp |
|
AWS::QBusiness::Application |
|
AWS::QBusiness::DataSource |
|
AWS::QBusiness::Index |
|
AWS::QBusiness::WebExperience |
|
AWS::RDS::DBCluster |
|
AWS::ResourceExplorer2::ManagedView |
|
AWS::ResourceExplorer2::View |
|
AWS::RUM::AppMonitor |
|
|
|
|
|
AWS::S3ObjectLambda::AccessPoint |
|
AWS::S3Outposts::Object |
|
AWS::SageMaker::Endpoint |
|
AWS::SageMaker::ExperimentTrialComponent |
|
AWS::SageMaker::FeatureGroup |
|
AWS::SCN::Instance |
|
AWS::ServiceDiscovery::Namespace |
|
AWS::ServiceDiscovery::Service |
|
AWS::SMSVoice::Message |
|
AWS::SMSVoice::OriginationIdentity |
|
AWS::SNS::PlatformEndpoint |
|
AWS::SNS::Topic |
|
AWS::SocialMessaging::PhoneNumberId |
|
AWS::SocialMessaging::WabaId |
|
AWS::SQS::Queue |
|
AWS::SSM::ManagedNode |
The ARN must be in one of the following formats:
|
AWS::SSMMessages::ControlChannel |
|
AWS::StepFunctions::StateMachine |
The ARN must be in one of the following formats:
|
AWS::SWF::Domain |
|
AWS::ThinClient::Device |
|
AWS::ThinClient::Environment |
|
AWS::Timestream::Database |
|
AWS::Timestream::Table |
|
AWS::VerifiedPermissions::PolicyStore |
|
1 For tables with streams enabled, the resources field in the data event contains both AWS::DynamoDB::Stream and AWS::DynamoDB::Table. If you specify AWS::DynamoDB::Table for the resources.type, it will log both DynamoDB table and DynamoDB Streams events by default. To exclude streams events, add a filter on the eventName field.

2 To log all data events for all objects in a specific S3 bucket, use the StartsWith operator, and include only the bucket ARN as the matching value. The trailing slash is intentional; do not exclude it.

3 To log events on all objects in an S3 access point, we recommend that you use only the access point ARN, don't include the object path, and use the StartsWith or NotStartsWith operators.
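The trailing-slash advice in footnote 2 comes down to plain prefix matching. The following sketch, with illustrative bucket and object names, shows how omitting the slash can also match a similarly named bucket.

```python
# Bucket and object ARNs below are illustrative placeholders.
object_in_bucket = "arn:aws:s3:::amzn-s3-demo-bucket/reports/2024.csv"
object_in_lookalike = "arn:aws:s3:::amzn-s3-demo-bucket2/reports/2024.csv"

with_slash = "arn:aws:s3:::amzn-s3-demo-bucket/"
without_slash = "arn:aws:s3:::amzn-s3-demo-bucket"

# With the trailing slash, only objects in the intended bucket match.
matches_with_slash = [a for a in (object_in_bucket, object_in_lookalike)
                      if a.startswith(with_slash)]

# Without it, the lookalike bucket's objects also match, over-collecting events.
matches_without_slash = [a for a in (object_in_bucket, object_in_lookalike)
                         if a.startswith(without_slash)]
```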
Filtering data events by resources.ARN using the AWS Management Console

Take the following steps to filter on the resources.ARN field using the CloudTrail console.
- Follow the steps in the create trail procedure, or follow the steps in the create event data store procedure.
- As you follow the steps to create the trail or event data store, make the following selections:
  - Choose Data events.
  - Choose the Resource type for which you want to log data events.
  - For Log selector template, choose Custom.
  - (Optional) In Selector name, enter a name to identify your selector. The selector name is a descriptive name for an advanced event selector, such as "Log data events for only two S3 buckets". The selector name is listed as Name in the advanced event selector and is viewable if you expand the JSON view.
  - In Advanced event selectors, do the following to filter on the resources.ARN:
    - For Field, choose resources.ARN.
    - For Operator, choose the condition operator. In this example, we'll choose starts with because we want to log data events for a specific S3 bucket.
    - For Value, enter the ARN for your resource type (for example, arn:aws:s3:::amzn-s3-demo-bucket).
    - To filter on another resources.ARN, choose + Condition. For information about how CloudTrail evaluates multiple conditions, see How CloudTrail evaluates multiple conditions for a field.
  - Choose +Field to add filters on other fields.
Filtering data events by resources.ARN using the AWS CLI

Using the AWS CLI, you can filter on the resources.ARN field to log events for a specific ARN or exclude logging for a specific ARN.

If you're updating an existing trail or event data store to log additional event selectors, get the current event selectors by running the get-event-selectors command for a trail, or the get-event-data-store command for an event data store. Then, update your event selectors to add a field selector for each data resource type that you want to log.
The following example shows how to configure your trail to include all data events for all Amazon S3 objects in a specific S3 bucket. The value for S3 events for the resources.type field is AWS::S3::Object. Because the ARN values for S3 objects and S3 buckets are slightly different, you must add the StartsWith operator for resources.ARN to capture all events.

aws cloudtrail put-event-selectors \
  --trail-name TrailName \
  --region region \
  --advanced-event-selectors \
  '[
    {
      "Name": "S3EventSelector",
      "FieldSelectors": [
        { "Field": "eventCategory", "Equals": ["Data"] },
        { "Field": "resources.type", "Equals": ["AWS::S3::Object"] },
        { "Field": "resources.ARN", "StartsWith": ["arn:aws:s3:::amzn-s3-demo-bucket/"] }
      ]
    }
  ]'
Filtering data events by readOnly value

Using advanced event selectors, you can filter based on the value of the readOnly field.

You can only use the Equals operator with the readOnly field. You can set the readOnly value to true or false. If you do not add this field, CloudTrail logs both read and write events. A value of true logs only read events. A value of false logs only write events.
Filtering data events by readOnly value using the AWS Management Console

Take the following steps to filter on the readOnly field using the CloudTrail console.
- Follow the steps in the create trail procedure, or follow the steps in the create event data store procedure.
- As you follow the steps to create the trail or event data store, make the following selections:
  - Choose Data events.
  - Choose the Resource type for which you want to log data events.
  - For Log selector template, choose the appropriate template for your use case.

    If you plan to do this | Choose this log selector template |
    ---|---|
    Log read events only and apply no other filters (for example, on the resources.ARN value). | Log readOnly events |
    Log write events only and apply no other filters (for example, on the resources.ARN value). | Log writeOnly events |
    Filter on the readOnly value and apply additional filters (for example, on the resources.ARN value). | Custom |

  - In Advanced event selectors, do the following to filter on the readOnly value:

    To log write events
    - For Field, choose readOnly.
    - For Operator, choose equals.
    - For Value, enter false.
    - Choose +Field to add filters on other fields.

    To log read events
    - For Field, choose readOnly.
    - For Operator, choose equals.
    - For Value, enter true.
    - Choose +Field to add filters on other fields.
Filtering data events by readOnly value using the AWS CLI

Using the AWS CLI, you can filter on the readOnly field.

You can only use the Equals operator with the readOnly field. You can set the readOnly value to true or false. If you do not add this field, CloudTrail logs both read and write events. A value of true logs only read events. A value of false logs only write events.

If you're updating an existing trail or event data store to log additional event selectors, get the current event selectors by running the get-event-selectors command for a trail, or the get-event-data-store command for an event data store. Then, update your event selectors to add a field selector for each data resource type that you want to log.
The following example shows how to configure your trail to log read-only data events for all Amazon S3 objects.
aws cloudtrail put-event-selectors \
  --trail-name TrailName \
  --region region \
  --advanced-event-selectors '[
    {
      "Name": "Log read-only S3 data events",
      "FieldSelectors": [
        { "Field": "eventCategory", "Equals": ["Data"] },
        { "Field": "resources.type", "Equals": ["AWS::S3::Object"] },
        { "Field": "readOnly", "Equals": ["true"] }
      ]
    }
  ]'
The next example creates a new event data store that logs only write-only data events for EBS Direct APIs. You can use the update-event-data-store command to update an existing event data store.
aws cloudtrail create-event-data-store \
  --name "eventDataStoreName" \
  --advanced-event-selectors \
  '[
    {
      "Name": "Log write-only EBS Direct API data events",
      "FieldSelectors": [
        { "Field": "eventCategory", "Equals": ["Data"] },
        { "Field": "resources.type", "Equals": ["AWS::EC2::Snapshot"] },
        { "Field": "readOnly", "Equals": ["false"] }
      ]
    }
  ]'