Bind mounts

With bind mounts, a file or directory on a host, such as an Amazon EC2 instance or AWS Fargate, is mounted into a container. Bind mounts are supported for tasks hosted on both Fargate and Amazon EC2 instances.
By default, bind mounts are tied to the lifecycle of the container using them. After all containers using a bind mount are stopped, for example when a task is stopped, the data is removed. For tasks hosted on Amazon EC2 instances, the data can instead be tied to the lifecycle of the host Amazon EC2 instance by specifying a `host` and optional `sourcePath` value in your task definition. For more information, see Using bind mounts.
The following are common use cases for bind mounts.

- To provide an empty data volume to mount in one or more containers.
- To mount a host data volume in one or more containers.
- To share a data volume from a source container with other containers in the same task.
- To expose a path and its contents from a Dockerfile to one or more containers.
Considerations when using bind mounts

When using bind mounts, consider the following.

- Tasks hosted on AWS Fargate using platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows) receive a minimum of 20 GiB of ephemeral storage for bind mounts by default. The total amount of ephemeral storage can be increased to a maximum of 200 GiB by specifying the `ephemeralStorage` object in your task definition.
- To expose files from a Dockerfile to a data volume when a task is run, the Amazon ECS data plane looks for a `VOLUME` directive. If the absolute path specified in the `VOLUME` directive is the same as the `containerPath` specified in the task definition, the data in the `VOLUME` directive path is copied to the data volume. In the following Dockerfile example, a file named `examplefile` in the `/var/log/exported` directory is written to the host and then mounted inside the container.

  ```
  FROM public.ecr.aws/amazonlinux/amazonlinux:latest
  RUN mkdir -p /var/log/exported
  RUN touch /var/log/exported/examplefile
  VOLUME ["/var/log/exported"]
  ```

  By default, the volume permissions are set to `0755` and the owner to `root`. These permissions can be customized in the Dockerfile. The following example defines the owner of the directory as `node`.

  ```
  FROM public.ecr.aws/amazonlinux/amazonlinux:latest
  RUN yum install -y shadow-utils && yum clean all
  RUN useradd node
  RUN mkdir -p /var/log/exported && chown node:node /var/log/exported
  RUN touch /var/log/exported/examplefile
  USER node
  VOLUME ["/var/log/exported"]
  ```
- For tasks hosted on Amazon EC2 instances, when a `host` and `sourcePath` value are not specified, the Docker daemon manages the bind mount for you. When no containers reference this bind mount, the Amazon ECS container agent task cleanup service eventually deletes it (by default, this happens 3 hours after the container exits, but you can configure this duration with the `ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION` agent variable). For more information, see Amazon ECS container agent configuration. If you need this data to persist beyond the lifecycle of the container, specify a `sourcePath` value for the bind mount.
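The cleanup wait duration is an agent configuration variable that is set on the container instance, typically in the agent configuration file at `/etc/ecs/ecs.config`. For example, to shorten the wait to 10 minutes (the `10m` value here is illustrative, not a recommendation):

```
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=10m
```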
Specifying a bind mount in your task definition

For Amazon ECS tasks hosted on either Fargate or Amazon EC2 instances, the following task definition JSON snippet shows the syntax for the `volumes`, `mountPoints`, and `ephemeralStorage` objects for a task definition.

```
{
  "family": "",
  ...
  "containerDefinitions": [
    {
      "mountPoints": [
        {
          "containerPath": "/path/to/mount_volume",
          "sourceVolume": "string"
        }
      ],
      "name": "string"
    }
  ],
  ...
  "volumes": [
    {
      "name": "string"
    }
  ],
  "ephemeralStorage": {
    "sizeInGiB": integer
  }
}
```
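A common mistake when wiring these objects together is a `sourceVolume` that doesn't match any `name` in the task-level `volumes` list. The cross-reference can be checked mechanically; the following is an illustrative sketch (not part of any AWS SDK or the ECS tooling):

```python
# Sketch: check that every mountPoint's sourceVolume refers to a volume
# defined in the task definition's top-level "volumes" list.
# Illustrative helper only, not part of any AWS SDK.

def undefined_source_volumes(task_def: dict) -> list[str]:
    defined = {v["name"] for v in task_def.get("volumes", [])}
    missing = []
    for container in task_def.get("containerDefinitions", []):
        for mp in container.get("mountPoints", []):
            if mp["sourceVolume"] not in defined:
                missing.append(f'{container["name"]}: {mp["sourceVolume"]}')
    return missing

task = {
    "family": "example",
    "containerDefinitions": [
        {
            "name": "web",
            "mountPoints": [
                {"sourceVolume": "webdata",
                 "containerPath": "/usr/share/nginx/html"}
            ],
        },
    ],
    "volumes": [{"name": "webdata"}],
}

print(undefined_source_volumes(task))  # [] -> every mount point resolves
```

Running a check like this before calling `RegisterTaskDefinition` surfaces the dangling reference locally instead of at task launch.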
For Amazon ECS tasks hosted on Amazon EC2 instances, you can use the optional `host` parameter and a `sourcePath` when specifying the task volume details. When specified, they tie the bind mount to the lifecycle of the task rather than the container.

```
"volumes": [
  {
    "host": {
      "sourcePath": "string"
    },
    "name": "string"
  }
]
```
The following describes each task definition parameter in more detail.
`name`

- Type: String
- Required: No

The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. This name is referenced in the `sourceVolume` parameter of the container definition `mountPoints`.

`host`
- Required: No

This parameter is specified when using bind mounts. To use Docker volumes, specify a `dockerVolumeConfiguration` instead. The contents of the `host` parameter determine whether your bind mount data volume persists on the host container instance and where it is stored. If the `host` parameter is empty, then the Docker daemon assigns a host path for your data volume, but the data is not guaranteed to persist after the containers associated with it stop running.

Bind mount host volumes are supported when using either the EC2 or Fargate launch types.

Windows containers can mount whole directories on the same drive as `$env:ProgramData`.

`sourcePath`
- Type: String
- Required: No

When the `host` parameter is used, specify a `sourcePath` to declare the path on the host container instance that is presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the `host` parameter contains a `sourcePath` file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the `sourcePath` value does not exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
`mountPoints`

- Type: Object Array
- Required: No

The mount points for data volumes in your container. This parameter maps to `Volumes` in the Create a container section of the Docker Remote API and the `--volume` option to docker run. Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be across drives.

`sourceVolume`
- Type: String
- Required: Yes, when `mountPoints` are used

The name of the volume to mount.

`containerPath`

- Type: String
- Required: Yes, when `mountPoints` are used

The path in the container at which to mount the volume.
`readOnly`

- Type: Boolean
- Required: No

If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.
`ephemeralStorage`

- Type: Object
- Required: No

The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate using platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows).

You can use the Copilot CLI, CloudFormation, the AWS SDKs, or the AWS CLI to specify ephemeral storage for a bind mount.
Bind mount examples

The following examples cover the most common use cases for using a bind mount for your containers.

To allocate an increased amount of ephemeral storage space for a Fargate task

For Amazon ECS tasks hosted on Fargate using platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows), you can allocate more than the default amount of ephemeral storage for the containers in your task to use. This example can be incorporated into the other examples to allocate more ephemeral storage for your Fargate tasks.
- In the task definition, define an `ephemeralStorage` object. The `sizeInGiB` value must be an integer between `21` and `200` and is expressed in GiB.

  ```
  "ephemeralStorage": {
    "sizeInGiB": integer
  }
  ```
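Because values outside the 21–200 GiB range are rejected at registration time, it can be useful to validate the setting locally first. A minimal illustrative sketch (not part of any AWS SDK):

```python
# Sketch: validate a Fargate ephemeralStorage setting before registering
# the task definition. 20 GiB is the default; an explicit sizeInGiB must
# be an integer between 21 and 200. Illustrative only, not an AWS helper.

def validate_ephemeral_storage(size_in_gib: int) -> int:
    if not 21 <= size_in_gib <= 200:
        raise ValueError(
            f"sizeInGiB must be between 21 and 200, got {size_in_gib}"
        )
    return size_in_gib

print(validate_ephemeral_storage(100))  # 100
```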
To provide an empty data volume for one or more containers

In some cases, you want to provide the containers in a task some scratch space. For example, you may have two database containers that need to access the same scratch file storage location during a task. This can be achieved using a bind mount.

- In the task definition `volumes` section, define a bind mount with the name `database_scratch`.

  ```
  "volumes": [
    {
      "name": "database_scratch"
    }
  ]
  ```

- In the `containerDefinitions` section, create the database container definitions so that they mount the volume.

  ```
  "containerDefinitions": [
    {
      "name": "database1",
      "image": "my-repo/database",
      "cpu": 100,
      "memory": 100,
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "database_scratch",
          "containerPath": "/var/scratch"
        }
      ]
    },
    {
      "name": "database2",
      "image": "my-repo/database",
      "cpu": 100,
      "memory": 100,
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "database_scratch",
          "containerPath": "/var/scratch"
        }
      ]
    }
  ]
  ```
To expose a path and its contents in a Dockerfile to a container

In this example, you have a Dockerfile that writes data that you want to mount inside a container. This example works for tasks hosted on Fargate or Amazon EC2 instances.

- Create a Dockerfile. The following example uses the public Amazon Linux 2 container image and creates a file named `examplefile` in the `/var/log/exported` directory that we want to mount inside the container. The `VOLUME` directive should specify an absolute path.

  ```
  FROM public.ecr.aws/amazonlinux/amazonlinux:latest
  RUN mkdir -p /var/log/exported
  RUN touch /var/log/exported/examplefile
  VOLUME ["/var/log/exported"]
  ```

  By default, the volume permissions are set to `0755` and the owner to `root`. These permissions can be changed in the Dockerfile. In the following example, the owner of the `/var/log/exported` directory is set to `node`.

  ```
  FROM public.ecr.aws/amazonlinux/amazonlinux:latest
  RUN yum install -y shadow-utils && yum clean all
  RUN useradd node
  RUN mkdir -p /var/log/exported && chown node:node /var/log/exported
  RUN touch /var/log/exported/examplefile
  USER node
  VOLUME ["/var/log/exported"]
  ```

- In the task definition `volumes` section, define a volume with the name `application_logs`.

  ```
  "volumes": [
    {
      "name": "application_logs"
    }
  ]
  ```

- In the `containerDefinitions` section, create the application container definitions so they mount the storage. The `containerPath` value must match the absolute path specified in the `VOLUME` directive from the Dockerfile.

  ```
  "containerDefinitions": [
    {
      "name": "application1",
      "image": "my-repo/application",
      "cpu": 100,
      "memory": 100,
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "application_logs",
          "containerPath": "/var/log/exported"
        }
      ]
    },
    {
      "name": "application2",
      "image": "my-repo/application",
      "cpu": 100,
      "memory": 100,
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "application_logs",
          "containerPath": "/var/log/exported"
        }
      ]
    }
  ]
  ```
To provide an empty data volume for a container that is tied to the lifecycle of the host Amazon EC2 instance

For tasks hosted on Amazon EC2 instances, you can use bind mounts and have the data tied to the lifecycle of the host Amazon EC2 instance rather than the containers referencing the volume. This is done by using the `host` parameter and specifying a `sourcePath` value. Any files that exist at the `sourcePath` are presented to the containers at the `containerPath` value, and any files that are written to the `containerPath` value are written to the `sourcePath` value on the host Amazon EC2 instance.

Amazon ECS doesn't sync your storage across Amazon EC2 instances. Tasks that use persistent storage can be placed on any Amazon EC2 instance in your cluster that has available capacity. If your tasks require persistent storage after stopping and restarting, you should always specify the same Amazon EC2 instance at task launch time with the AWS CLI start-task command. You can also use Amazon EFS volumes for persistent storage. For more information, see Amazon EFS volumes.
- In the task definition `volumes` section, define a bind mount with `name` and `sourcePath` values. In the following example, the host Amazon EC2 instance contains data at `/ecs/webdata` that you want to mount inside the container.

  ```
  "volumes": [
    {
      "name": "webdata",
      "host": {
        "sourcePath": "/ecs/webdata"
      }
    }
  ]
  ```

- In the `containerDefinitions` section, define a container with a `mountPoints` value that references the name of the bind mount and the `containerPath` value at which to mount it on the container.

  ```
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx",
      "cpu": 99,
      "memory": 100,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "webdata",
          "containerPath": "/usr/share/nginx/html"
        }
      ]
    }
  ]
  ```
To mount a defined volume on multiple containers at different locations

You can define a data volume in a task definition and mount that volume at different locations on different containers. For example, your host container instance has a website data folder at `/data/webroot`, and you may want to mount that data volume as read-only on two different web servers that have different document roots.

- In the task definition `volumes` section, define a data volume with the name `webroot` and the source path `/data/webroot`.

  ```
  "volumes": [
    {
      "name": "webroot",
      "host": {
        "sourcePath": "/data/webroot"
      }
    }
  ]
  ```

- In the `containerDefinitions` section, define a container for each web server with `mountPoints` values that associate the `webroot` volume with the `containerPath` value pointing to the document root for that container.

  ```
  "containerDefinitions": [
    {
      "name": "web-server-1",
      "image": "my-repo/ubuntu-apache",
      "cpu": 100,
      "memory": 100,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "webroot",
          "containerPath": "/var/www/html",
          "readOnly": true
        }
      ]
    },
    {
      "name": "web-server-2",
      "image": "my-repo/sles11-apache",
      "cpu": 100,
      "memory": 100,
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 8080
        }
      ],
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "webroot",
          "containerPath": "/srv/www/htdocs",
          "readOnly": true
        }
      ]
    }
  ]
  ```
To mount volumes from another container using volumesFrom

For tasks hosted on Amazon EC2 instances, you can define one or more volumes on a container, and then use the `volumesFrom` parameter in a different container definition (within the same task) to mount all of the volumes from the `sourceContainer` at their originally defined mount points. The `volumesFrom` parameter applies to volumes defined in the task definition, and those that are built into the image with a Dockerfile.
- (Optional) To share a volume that is built into an image, build the image with the volume declared in a `VOLUME` instruction. The following example Dockerfile uses an `httpd` image and then adds a volume and mounts it at `dockerfile_volume` in the Apache document root (which is the folder used by the `httpd` web server):

  ```
  FROM httpd
  VOLUME ["/usr/local/apache2/htdocs/dockerfile_volume"]
  ```

  You can build an image with this Dockerfile and push it to a repository, such as Docker Hub, and use it in your task definition. The example `my-repo/httpd_dockerfile_volume` image used in the following steps was built with the above Dockerfile.
- Create a task definition that defines your other volumes and mount points for the containers. In this example `volumes` section, you create an empty volume called `empty`, which the Docker daemon manages. There is also a host volume defined called `host_etc`, which exports the `/etc` folder on the host container instance.

  ```
  {
    "family": "test-volumes-from",
    "volumes": [
      {
        "name": "empty",
        "host": {}
      },
      {
        "name": "host_etc",
        "host": {
          "sourcePath": "/etc"
        }
      }
    ],
  ```

  In the container definitions section, create a container that mounts the volumes defined earlier. In this example, the `web` container (which uses the image built with a volume in the Dockerfile) mounts the `empty` and `host_etc` volumes.

  ```
    "containerDefinitions": [
      {
        "name": "web",
        "image": "my-repo/httpd_dockerfile_volume",
        "cpu": 100,
        "memory": 500,
        "portMappings": [
          {
            "containerPort": 80,
            "hostPort": 80
          }
        ],
        "mountPoints": [
          {
            "sourceVolume": "empty",
            "containerPath": "/usr/local/apache2/htdocs/empty_volume"
          },
          {
            "sourceVolume": "host_etc",
            "containerPath": "/usr/local/apache2/htdocs/host_etc"
          }
        ],
        "essential": true
      },
  ```

  Create another container that uses `volumesFrom` to mount all of the volumes that are associated with the `web` container. All of the volumes on the `web` container are likewise mounted on the `busybox` container (including the volume specified in the Dockerfile that was used to build the `my-repo/httpd_dockerfile_volume` image).

  ```
      {
        "name": "busybox",
        "image": "busybox",
        "volumesFrom": [
          {
            "sourceContainer": "web"
          }
        ],
        "cpu": 100,
        "memory": 500,
        "entryPoint": [
          "sh",
          "-c"
        ],
        "command": [
          "echo $(date) > /usr/local/apache2/htdocs/empty_volume/date && echo $(date) > /usr/local/apache2/htdocs/host_etc/date && echo $(date) > /usr/local/apache2/htdocs/dockerfile_volume/date"
        ],
        "essential": false
      }
    ]
  }
  ```
When this task is run, the two containers mount the volumes, and the `command` in the `busybox` container writes the date and time to a file called `date` in each of the volume folders. The folders are then visible at the website displayed by the `web` container.

Note: Because the `busybox` container runs a quick command and then exits, it must be set as `"essential": false` in the container definition. Otherwise, it stops the entire task when it exits.
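The essential-flag requirement from the note can also be verified mechanically before registering the task definition. A small illustrative sketch (not part of any AWS SDK):

```python
# Sketch: verify that a one-shot helper container is not marked
# essential, so its exit doesn't stop the whole task.
# Illustrative helper only, not part of any AWS SDK.

def nonessential(task_def: dict, container_name: str) -> bool:
    for container in task_def.get("containerDefinitions", []):
        if container["name"] == container_name:
            # "essential" defaults to true if omitted in a task definition.
            return container.get("essential", True) is False
    raise KeyError(container_name)

task = {
    "containerDefinitions": [
        {"name": "web", "essential": True},
        {"name": "busybox", "essential": False,
         "volumesFrom": [{"sourceContainer": "web"}]},
    ]
}

print(nonessential(task, "busybox"))  # True -> safe for a one-shot command
```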