Using the awslogs log driver
You can configure the containers in your tasks to send log information to CloudWatch Logs.
If you do this, you can view the logs from the containers
in your Fargate tasks.
This topic describes how to get started using the awslogs log driver in your task definitions.
Note

The type of information that is logged by the containers in your task depends mostly on their ENTRYPOINT command. By default, the captured logs show the command output that you would typically see in an interactive terminal if you ran the container locally: the STDOUT and STDERR I/O streams. The awslogs log driver simply passes these logs from Docker to CloudWatch Logs. For more information about how Docker logs are processed, including alternative ways to capture different file data or streams, see View logs for a container or service in the Docker documentation.
Turning on the awslogs log driver for your containers
If you're using the Fargate launch type for your tasks, you need to
add the required logConfiguration
parameters to your task definition to
turn on the awslogs
log driver. For more information, see Specifying a log configuration in your task
definition.
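For reference, a minimal container-level log configuration for a Fargate task might look like the following sketch. The group, Region, and stream prefix values here are placeholders; substitute your own.

"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-task",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs"
    }
}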
Creating a log group
The awslogs log driver can send log streams to an existing log group in CloudWatch Logs or create a new log group on your behalf. The AWS Management Console provides an auto-configure option, which creates a log group on your behalf using the task definition family name with ecs as the prefix. Alternatively, you can manually specify your log configuration options and set the awslogs-create-group option to true, which creates the log group on your behalf.
Note
To use the awslogs-create-group
option to have your log group created, your
task execution IAM role policy or EC2 instance role policy must include the logs:CreateLogGroup
permission.
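A minimal IAM policy statement that grants this permission might look like the following sketch. The Resource value here is a broad placeholder; you can scope it down to specific log group ARNs to match your security requirements.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "*"
        }
    ]
}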
The following code shows how to set the awslogs-create-group
option.
{ "containerDefinitions": [ { "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-group": "firelens-container", "awslogs-region": "us-west-2", "awslogs-create-group": "true", "awslogs-stream-prefix": "firelens" } } ] }
Using the auto-configuration feature to create a log group
When you register a task definition in the Amazon ECS console, you can allow Amazon ECS to auto-configure your CloudWatch logs. Doing this causes a log group to be created on your behalf using the task definition family name with ecs as the prefix. For example, a task definition family named my-app typically results in a log group named /ecs/my-app. For more information, see Creating a task definition using the console.
Available awslogs log driver options
The awslogs log driver supports the following options in Amazon ECS task definitions. For more information, see CloudWatch Logs logging driver in the Docker documentation.
awslogs-create-group

Required: No

Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false.

Note

Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.

awslogs-region

Required: Yes

Specify the AWS Region that the awslogs log driver sends your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.

awslogs-group

Required: Yes

Specify the log group that the awslogs log driver sends its log streams to. For more information, see Creating a log group.

awslogs-stream-prefix

Required: Yes, when using the Fargate launch type.

Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the following format.

prefix-name/container-name/ecs-task-id

For Amazon ECS services, you can use the service name as the prefix. Doing so lets you trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
awslogs-datetime-format

Required: No

This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages.

One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry (see the example after this list).

For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.

Note

Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
awslogs-multiline-pattern

Required: No

This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages.

For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.

Note

Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
mode

Required: No

Valid values: non-blocking | blocking

Default value: blocking

This option defines the delivery mode of log messages from the container to CloudWatch Logs. The delivery mode you choose affects application availability when the flow of logs from the container to CloudWatch is interrupted.

If you use the default blocking mode and the flow of logs to CloudWatch is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This might cause the application to become unresponsive and lead to container health check failures.

If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent to CloudWatch. We recommend using this mode if you want to ensure service availability and are okay with some log loss.

max-buffer-size

Required: No

Default value: 1m

When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
Specifying a log configuration in your task definition
Before your containers can send logs to CloudWatch, you must specify the
awslogs
log driver for containers in your task definition. This
section describes the log configuration for a container to use the
awslogs
log driver. For more information, see Creating a task definition using the console.
The task definition JSON that follows has a logConfiguration
object
specified for each container. One is for the WordPress container that sends logs to
a log group called awslogs-wordpress
. The other is for a MySQL
container that sends logs to a log group that's called awslogs-mysql
.
Both containers use the awslogs-example
log stream prefix.
{ "containerDefinitions": [ { "name": "wordpress", "links": [ "mysql" ], "image": "wordpress", "essential": true, "portMappings": [ { "containerPort": 80, "hostPort": 80 } ], "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-create-group": "true", "awslogs-group": "
awslogs-wordpress
", "awslogs-region": "us-west-2
", "awslogs-stream-prefix": "awslogs-example
" } }, "memory": 500, "cpu": 10 }, { "environment": [ { "name": "MYSQL_ROOT_PASSWORD", "value": "password" } ], "name": "mysql", "image": "mysql", "cpu": 10, "memory": 500, "essential": true, "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-create-group": "true", "awslogs-group": "awslogs-mysql
", "awslogs-region": "us-west-2
", "awslogs-stream-prefix": "awslogs-example
", "mode": "non-blocking", "max-buffer-size": "25m" } } } ], "family": "awslogs-example" }
Viewing awslogs container logs in CloudWatch Logs
After your Fargate tasks that use the
awslogs
log driver have launched, your configured containers should
be sending their log data to CloudWatch Logs. You can view and search these logs in the
console.
To view your CloudWatch Logs data for a container from the Amazon ECS console

1. Open the console at https://console.aws.amazon.com/ecs/v2.

2. On the Clusters page, select the cluster that contains the task to view.

3. On the Cluster: cluster_name page, choose Tasks, and then select the task to view.

4. On the Task: task_id page, under Container details, choose Log configuration to view the logs.

5. In the Log Configuration section, choose View logs in CloudWatch, which opens the associated log stream in the CloudWatch console.
To view your CloudWatch Logs data in the CloudWatch console

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

2. In the left navigation pane, choose Logs.

3. Select a log group to view. You should see the log groups that you created in Creating a log group.

4. Choose a log stream to view.
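If you prefer the command line, you can also follow a log group with AWS CLI version 2, for example as follows. The log group name here is a placeholder.

# Stream new log events from a log group as they arrive
aws logs tail /ecs/my-app --follow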