
ecs-cli compose service

Description

Manage Amazon ECS services with docker-compose-style commands on an ECS cluster.

Note

To run tasks with the Amazon ECS CLI instead of creating services, see ecs-cli compose.

The ecs-cli compose service command works with a Docker compose file to create task definitions and manage services. At this time, the Amazon ECS CLI supports Docker compose file syntax versions 1 and 2. By default, the command looks for a compose file in the current directory, called docker-compose.yml. However, you can also specify a different file name or path to a compose file with the --file option. This is especially useful for managing tasks and services from multiple compose files at a time with the Amazon ECS CLI.

The ecs-cli compose service command uses a project name with the task definitions and services that it creates. When the CLI creates a task definition from a compose file, the task definition is called ecscompose-project-name. When the CLI creates a service from a compose file, the service is called ecscompose-service-project-name. By default, the project name is the name of the current working directory. However, you can also specify your own project name with the --project-name option.
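
For example, assuming a compose file named hello-world.yml in the current directory (the file and project names here are placeholders), the following command creates a task definition named ecscompose-hello-world and a service named ecscompose-service-hello-world:

ecs-cli compose --project-name hello-world --file hello-world.yml service create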

Note

The Amazon ECS CLI can only manage tasks, services, and container instances that were created with the CLI. To manage tasks, services, and container instances that were not created by the Amazon ECS CLI, use the AWS Command Line Interface or the AWS Management Console.

The following parameters are supported in compose files for the Amazon ECS CLI:

  • command

  • cpu_shares

  • dns

  • dns_search

  • entrypoint

  • environment: If an environment variable value is not specified in the compose file, but it exists in the shell environment, the shell environment variable value is passed to the task definition that is created for any associated tasks or services.

    Important

    We do not recommend using plaintext environment variables for sensitive information, such as credential data.

  • env_file

    Important

    We do not recommend using plaintext environment variables for sensitive information, such as credential data.

  • extra_hosts

  • hostname

  • image

  • labels

  • links

  • log_driver

  • log_opt

  • mem_limit (in bytes)

  • ports

  • privileged

  • read_only

  • security_opt

  • ulimits

  • user

  • volumes

  • volumes_from

  • working_dir

Important

The build directive is not supported at this time.

For more information about Docker compose file syntax, see the Compose file reference in the Docker documentation.
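
As a minimal sketch, the following version 2 compose file uses only parameters from the list above; the web service name, nginx image, and environment variable are placeholders, and the mem_limit value is expressed in bytes:

version: '2'
services:
  web:
    image: nginx
    cpu_shares: 100
    mem_limit: 524288000
    ports:
      - "80:80"
    environment:
      - APP_ENV=production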

Syntax

ecs-cli compose [--verbose] [--file compose-file] [--project-name project-name] service [subcommand] [arguments] [--help]

Options


--verbose, --debug

Increases the verbosity of command output to aid in diagnostics.

Required: No

--file, -f compose-file

Specifies the Docker compose file to use. At this time, the latest version of the Amazon ECS CLI supports Docker compose file syntax versions 1 and 2. If the COMPOSE_FILE environment variable is set when ecs-cli compose is run, then the Docker compose file is set to the value of that environment variable.

Type: String

Default: ./docker-compose.yml

Required: No

--project-name, -p project-name

Specifies the project name to use. If the COMPOSE_PROJECT_NAME environment variable is set when ecs-cli compose is run, then the project name is set to the value of that environment variable.

Type: String

Default: The current directory name.

Required: No

--help, -h

Shows the help text for the specified command.

Required: No

Available Subcommands

The ecs-cli compose service command supports the following subcommands and arguments:

create [--deployment-max-percent n] [--deployment-min-healthy-percent n] [--load-balancer-name value|--target-group-arn value] [--container-name value] [--container-port value] [--role value]

Creates an ECS service from your compose file. The service is created with a desired count of 0, so no containers are started by this command.

The --deployment-max-percent option specifies the upper limit (as a percentage of the service's desiredCount) on the number of tasks that can be running in the service during a deployment (the default value is 200). The --deployment-min-healthy-percent option specifies the lower limit (as a percentage of the service's desiredCount) on the number of tasks that must remain running and healthy in the service during a deployment (the default value is 100). For more information, see maximumPercent and minimumHealthyPercent.

You can optionally run your service behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the service. For more information, see Service Load Balancing. After you create a service, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable.

Note

You must create your load balancer resources before you can configure a service to use them. Your load balancer resources should reside in the same VPC as your container instances, and they should be configured to use the same subnets. You must also add a security group rule to your container instance security group that allows inbound traffic from your load balancer. For more information, see Creating a Load Balancer.

  • To configure your service to use an existing Elastic Load Balancing Classic Load Balancer, you must specify the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.

  • To configure your service to use an existing Elastic Load Balancing Application Load Balancer, you must specify the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
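
For example, the following sketch creates a service with custom deployment limits; the percentage values shown are illustrative:

ecs-cli compose --project-name hello-world --file hello-world.yml service create --deployment-max-percent 150 --deployment-min-healthy-percent 50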

start

Starts one copy of each of the containers on the created ECS service. This command updates the desired count of the service to 1.
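
For example, the following sketch starts the service created for the hello-world project:

ecs-cli compose --project-name hello-world --file hello-world.yml service start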

up [--deployment-max-percent n] [--deployment-min-healthy-percent n] [--load-balancer-name value|--target-group-arn value] [--container-name value] [--container-port value] [--role value]

Creates an ECS service from your compose file (if it does not already exist) and runs one instance of that task on your cluster (a combination of create and start). This command updates the desired count of the service to 1.

The --deployment-max-percent option specifies the upper limit (as a percentage of the service's desiredCount) on the number of tasks that can be running in the service during a deployment (the default value is 200). The --deployment-min-healthy-percent option specifies the lower limit (as a percentage of the service's desiredCount) on the number of tasks that must remain running and healthy in the service during a deployment (the default value is 100). For more information, see maximumPercent and minimumHealthyPercent.

You can optionally run your service behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the service. For more information, see Service Load Balancing. After you create a service, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable.

Note

You must create your load balancer resources before you can configure a service to use them. Your load balancer resources should reside in the same VPC as your container instances, and they should be configured to use the same subnets. You must also add a security group rule to your container instance security group that allows inbound traffic from your load balancer. For more information, see Creating a Load Balancer.

  • To configure your service to use an existing Elastic Load Balancing Classic Load Balancer, you must specify the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.

  • To configure your service to use an existing Elastic Load Balancing Application Load Balancer, you must specify the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.
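
For example, the following sketch brings up a service behind an existing Classic Load Balancer; the load balancer name, container name, container port, and role shown here are placeholders:

ecs-cli compose --project-name hello-world --file hello-world.yml service up --load-balancer-name my-classic-elb --container-name web --container-port 80 --role ecsServiceRole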

ps

Lists all the containers in your cluster that belong to the service created with the compose project.
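
For example, the following sketch lists the containers that belong to the service for the hello-world project:

ecs-cli compose --project-name hello-world --file hello-world.yml service ps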

scale [--deployment-max-percent n] [--deployment-min-healthy-percent n] n

Scales the desired count of the service to the specified count.

The --deployment-max-percent option specifies the upper limit (as a percentage of the service's desiredCount) on the number of tasks that can be running in the service during a deployment (the default value is 200). The --deployment-min-healthy-percent option specifies the lower limit (as a percentage of the service's desiredCount) on the number of tasks that must remain running and healthy in the service during a deployment (the default value is 100). For more information, see maximumPercent and minimumHealthyPercent.
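
For example, the following sketch sets the service's minimum healthy percent to 50 and scales the desired count to 3; the values shown are illustrative:

ecs-cli compose --project-name hello-world --file hello-world.yml service scale --deployment-min-healthy-percent 50 3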

stop

Stops the running tasks that belong to the service created with the compose project. This command updates the desired count of the service to 0.
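
For example, the following sketch stops the tasks in the hello-world project's service without deleting the service:

ecs-cli compose --project-name hello-world --file hello-world.yml service stop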

rm

Updates the desired count of the service to 0 and then deletes the service.

help

Shows the help text for the specified command.

Examples

Example 1

This example brings up an ECS service with the project name hello-world from the hello-world.yml compose file.

ecs-cli compose --project-name hello-world --file hello-world.yml service up

Output:

INFO[0001] Using ECS task definition                     TaskDefinition=ecscompose-hello-world:3
INFO[0001] Created an ECS Service                        serviceName=ecscompose-service-hello-world taskDefinition=ecscompose-hello-world:3
INFO[0002] Updated ECS service successfully              desiredCount=1 serviceName=ecscompose-service-hello-world
INFO[0002] Describe ECS Service status                   desiredCount=1 runningCount=0 serviceName=ecscompose-service-hello-world
INFO[0033] Describe ECS Service status                   desiredCount=1 runningCount=0 serviceName=ecscompose-service-hello-world
INFO[0063] Describe ECS Service status                   desiredCount=1 runningCount=0 serviceName=ecscompose-service-hello-world
INFO[0093] Describe ECS Service status                   desiredCount=1 runningCount=0 serviceName=ecscompose-service-hello-world
INFO[0108] ECS Service has reached a stable state        desiredCount=1 runningCount=1 serviceName=ecscompose-service-hello-world

Example 2

This example scales the service created by the hello-world project to a desired count of 2.

ecs-cli compose --project-name hello-world --file hello-world.yml service scale 2

Output:

INFO[0001] Updated ECS service successfully              desiredCount=2 serviceName=ecscompose-service-hello-world
INFO[0001] Describe ECS Service status                   desiredCount=2 runningCount=1 serviceName=ecscompose-service-hello-world
INFO[0032] Describe ECS Service status                   desiredCount=2 runningCount=1 serviceName=ecscompose-service-hello-world
INFO[0063] ECS Service has reached a stable state        desiredCount=2 runningCount=2 serviceName=ecscompose-service-hello-world

Example 3

This example scales the service created by the hello-world project to a desired count of 0 and then deletes the service.

ecs-cli compose --project-name hello-world --file hello-world.yml service rm

Output:

INFO[0000] Updated ECS service successfully              desiredCount=0 serviceName=ecscompose-service-hello-world
INFO[0000] Describe ECS Service status                   desiredCount=0 runningCount=2 serviceName=ecscompose-service-hello-world
INFO[0016] ECS Service has reached a stable state        desiredCount=0 runningCount=0 serviceName=ecscompose-service-hello-world
INFO[0016] Deleted ECS service                           service=ecscompose-service-hello-world
INFO[0016] ECS Service has reached a stable state        desiredCount=0 runningCount=0 serviceName=ecscompose-service-hello-world

Example 4

This example brings up a service from the nginx-compose.yml compose file and configures it to use the target group of an existing Application Load Balancer.

ecs-cli compose -f nginx-compose.yml service up --target-group-arn arn:aws:elasticloadbalancing:us-east-1:aws_account_id:targetgroup/ecs-cli-alb/9856106fcc5d4be8 --container-name nginx --container-port 80 --role ecsServiceRole

Output:

INFO[0000] Using ECS task definition                     TaskDefinition="ecscompose-ecs-cli:3"
INFO[0001] Created an ECS service                        service=ecscompose-service-ecs-cli taskDefinition="ecscompose-ecs-cli:3"
INFO[0001] Updated ECS service successfully              desiredCount=1 serviceName=ecscompose-service-ecs-cli
INFO[0001] Describe ECS Service status                   desiredCount=1 runningCount=0 serviceName=ecscompose-service-ecs-cli
INFO[0016] ECS Service has reached a stable state        desiredCount=1 runningCount=1 serviceName=ecscompose-service-ecs-cli