
What kind of approach is recommended for updating the container of a service which is running in Amazon ECS?

The AWS documentation says: "If you have updated the Docker image of your application, you can create a new task definition with that image and deploy it to your service, one task at a time." This is pretty much everything the documentation currently offers (13 April 2015).

Did I understand correctly that the only way to update my application container in Amazon ECS is to create a new task definition, stop the old task, and start the new one?

I have been successfully using the "latest" tag with CoreOS and Fleetctl. This has the benefit of not needing to change the Docker image's tag for new updates: reloading the service picks up the new changes and updates the container (still using the same "latest" tag).

What approaches have you used for updating a service with an updated Docker image in Amazon ECS?

ceejayoz
Petrus Repo
  • Also trying to figure this out as well, as we're hoping to use ECS for deploying a variety of daemons that need to run continuously in production. – parent5446 Jul 20 '15 at 14:46
  • Just to confirm, you said that restarting an ECS service will pull down the latest version of an image? I have been looking for documentation about this and can't find it anywhere. – mmilleruva Jul 28 '15 at 15:21
  • Any confirmation on this one? – Lior Ohana Jan 04 '16 at 18:51
  • @LiorOhana Sadly it's true. See my answer for details. – hamx0r Nov 23 '16 at 16:49
  • I posted a new detailed answer below, but to clarify here: Your service will always attempt to pull a fresh copy of your container from the repo, based on the tag you've set. If a task is killed, when the service deploys it again, it has no recollection of what _was_ in the repo, only what _is_ in the repo. – MrDuk Mar 23 '18 at 16:16
  • From @foreveryoung's answer below, the solution is in https://github.com/silinternational/ecs-deploy. Have a look at that repo, you will see a more robust solution than those posted here so far. Using `latest` is asking for trouble. That said, if you must use `latest`, you only need to run `aws ecs update-service --force-new-deployment` to force a pull from the repo. ref https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html. – Mike D Apr 03 '18 at 03:48

7 Answers


Not sure if this counts as an abandoned question; I stumbled upon it while troubleshooting my own issue, and I'm adding my solution now that it's resolved.

To update a service with a new container, you need to:

  1. upload the new container image to your repository;
  2. register an updated task definition that references it;
  3. update the service to use the new task definition;
  4. important: make sure the service's deployment settings allow launching the new version of the task.

If the service's task is not updated to the latest version, check the "Events" tab for errors. For example, ECS may not have been able to start the new version of your service: perhaps you have only one EC2 instance in the cluster and the application port is already in use on that host. In this case, set the "minimum healthy percent / maximum percent" limits to 0% / 100%; this way, ECS will kill the old container before deploying the new one. The rollout also takes a few minutes, so don't worry if you don't see immediate feedback.
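Those limits can also be set from the CLI instead of the console; a minimal sketch (the cluster and service names here are placeholders):

```shell
# Allow ECS to stop the old task before starting the new one:
# 0% minimum healthy, 100% maximum of the desired count.
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --deployment-configuration "minimumHealthyPercent=0,maximumPercent=100"
```

With only one instance and a fixed host port, this trade-off (brief downtime in exchange for a successful rollout) is usually unavoidable.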

Below is an example deployment script that updates the container in a pre-configured cluster and service. Note that there is no need to specify a revision if you just mean "use the latest revision of the family".

#!/bin/bash
awsRegion=us-east-1
containerName=..
containerRepository=..
taskDefinitionFile=...
taskDefinitionName=...
serviceName=...

echo 'build docker image...'
docker build -t "$containerName" .

echo 'upload docker image...'
docker tag "$containerName:latest" "$containerRepository:$containerName"
docker push "$containerRepository:$containerName"

echo 'update task definition...'
aws ecs register-task-definition --cli-input-json "file://$taskDefinitionFile" --region "$awsRegion" > /dev/null

echo 'update our service with the new task definition...'
aws ecs update-service --service "$serviceName" --task-definition "$taskDefinitionName" --region "$awsRegion" > /dev/null
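If you want the script to block until the rollout finishes, the CLI has a waiter for that; a sketch reusing the same placeholder variables (note that `services-stable` also needs the cluster if you are not on the default one):

```shell
# Block until the service reaches a steady state after the update.
aws ecs wait services-stable --services "$serviceName" --region "$awsRegion"
echo 'deployment complete'
```

This is handy in CI pipelines so a failed deployment fails the build instead of silently churning in the "Events" tab.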
uiron
  • This forces me to have a task definition as a file locally, and if I understand correctly, that is the only place where I can define environment variables. Is there any way to do this without having the environment variables locally? Ideally I'd like to issue a command pointing to a new Docker image tag without sending any other information about the task/service/container/etc. – rmac Aug 02 '16 at 13:10
  • The comment on `set "min health/max health" limits to "0%, 100%"` is golden. Thank you so much! – sivabudh Mar 01 '17 at 09:44
  • Word of caution here: if you set your `min` to `0%`, then when your service deploys a changed task definition, you're essentially giving it full authority to bring down _all_ tasks at the same time for that deployment. – MrDuk Mar 23 '18 at 16:06

To update your application, update the task definition and then update the service. See http://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html
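In CLI terms, that two-step update can be sketched as follows (the file, cluster, service, and family names are placeholders):

```shell
# 1. Register a new task definition revision from a JSON file
#    that references the updated image.
aws ecs register-task-definition --cli-input-json file://taskdef.json

# 2. Point the service at the family; omitting the revision number
#    makes ECS use the latest revision of that family.
aws ecs update-service --cluster my-cluster --service my-service \
  --task-definition my-task-family
```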

Chris Barclay

I use part of the ecs-deploy script with my own improvements (it takes the image from every container definition and replaces the tag portion with $TAG_PURE): https://gist.github.com/Forever-Young/e939d9cc41bc7a105cdcf8cd7ab9d714

# based on ecs-deploy script
TASK_DEFINITION_NAME=$(aws ecs describe-services --services $SERVICE --cluster $CLUSTER | jq -r .services[0].taskDefinition)
TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_DEFINITION_NAME" | jq '.taskDefinition')
NEW_CONTAINER_DEFINITIONS=$(echo "$TASK_DEFINITION" | jq --arg NEW_TAG $TAG_PURE 'def replace_tag: if . | test("[a-zA-Z0-9.]+/[a-zA-Z0-9]+:[a-zA-Z0-9]+") then sub("(?<s>[a-zA-Z0-9.]+/[a-zA-Z0-9]+:)[a-zA-Z0-9]+"; "\(.s)" + $NEW_TAG) else . end ; .containerDefinitions | [.[] | .+{image: .image | replace_tag}]')
TASK_DEFINITION=$(echo "$TASK_DEFINITION" | jq ".+{containerDefinitions: $NEW_CONTAINER_DEFINITIONS}")
# Default JQ filter for new task definition
NEW_DEF_JQ_FILTER="family: .family, volumes: .volumes, containerDefinitions: .containerDefinitions"
# Some options in task definition should only be included in new definition if present in
# current definition. If found in current definition, append to JQ filter.
CONDITIONAL_OPTIONS=(networkMode taskRoleArn)
for i in "${CONDITIONAL_OPTIONS[@]}"; do
  re=".*${i}.*"
  if [[ "$TASK_DEFINITION" =~ $re ]]; then
    NEW_DEF_JQ_FILTER="${NEW_DEF_JQ_FILTER}, ${i}: .${i}"
  fi
done

# Build new DEF with jq filter
NEW_DEF=$(echo "$TASK_DEFINITION" | jq "{${NEW_DEF_JQ_FILTER}}")
NEW_TASKDEF=$(aws ecs register-task-definition --cli-input-json "$NEW_DEF" | jq -r .taskDefinition.taskDefinitionArn)

echo "New task definition registered, $NEW_TASKDEF"

aws ecs update-service --cluster $CLUSTER --service $SERVICE --task-definition "$NEW_TASKDEF" > /dev/null

echo "Service updated"
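The snippet above expects $CLUSTER, $SERVICE, and $TAG_PURE to be set in the environment. Assuming it is saved as a script, a hypothetical invocation (the script name, cluster/service names, and tag are all examples) might look like:

```shell
# Deploy the image tagged "v1.2.3" to the given cluster/service;
# the script rewrites the tag in every container definition.
CLUSTER=my-cluster SERVICE=my-service TAG_PURE=v1.2.3 ./ecs-deploy.sh
```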

I know this is an old thread, but the solution is much simpler than most of the answers here make it out to be.

How to update the running container in two steps:

The below assumes you have a service running a task that references a container tagged latest (or any other static tag that doesn't change across container updates).

  1. Upload your new container to the repository
  2. Manually kill your tasks

If the goal is to get a new build out into the wild, we don't really need to rely on our service for that (and I'd argue we shouldn't). If you kill your task, the service will recognize it no longer has the desired count of tasks running and simply spin up a new one. This triggers a re-pull of your container, based on the same tag.
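Step 2 can be done from the CLI as well; a sketch (the cluster and service names are placeholders):

```shell
# List the tasks the service is running, then stop each one; the
# service scheduler re-launches replacements, re-pulling "latest".
for task in $(aws ecs list-tasks --cluster my-cluster \
    --service-name my-service --query 'taskArns[]' --output text); do
  aws ecs stop-task --cluster my-cluster --task "$task" > /dev/null
done
```

Stopping tasks one at a time (rather than all at once) keeps some capacity serving traffic during the re-pull.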

ECS services are an HA safety net, not a replacement for your CI/CD pipeline.


Bonus: If the goal is to have a service recognize that a new container has been pushed (regardless of tags), we need to consider the implications of that. Do we really want a basic service controlling our deployment pipeline for us? Likely not. Ideally, you'll push your containers with distinct tags (based on release versions or something similar). In that case, the barrier to deployment is that the service has to be told about something new; again, the service is a safety net, and nothing more.


How to deploy new tags in three steps:

  1. Upload your new container:tag to the repository
  2. Create a new task definition referencing the new tag
  3. Update your service to reference the new task definition
    • Careful here! If you have minimum healthy set to 0% as some other answers suggest, you're giving AWS full authority to kill your entire service in order to deploy the new task definition. If you prefer a rolling / gradual deployment, set your minimum to something >0%.
    • Alternatively, set your minimum healthy to 100% and your maximum healthy to something >100% to allow your service to deploy the new tasks before killing off the old ones (minimizing the impact to your users).

From this point, your service will automatically recognize you have specified a new task, and work on deploying that out based on the minimum/maximum healthy thresholds you've configured.
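The gradual variant described in the bullets corresponds to a deployment configuration like this (cluster, service, and task definition names are placeholders):

```shell
# Start new tasks before stopping old ones: keep 100% of the desired
# count healthy, allow up to 200% during the rollout.
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --task-definition my-task-family:2 \
  --deployment-configuration "minimumHealthyPercent=100,maximumPercent=200"
```

Note that maximumPercent above 100% requires spare capacity (ports, CPU, memory) on your container instances to temporarily run both versions side by side.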

MrDuk

You can use the --force-new-deployment option on the ecs update-service API call. No change to the service definition itself is needed. From the docs:

Whether to force a new deployment of the service. Deployments are not forced by default. You can use this option to trigger a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination (my_image:latest ) or to roll Fargate tasks onto a newer platform version.

With the AWS CLI, it's as simple as:

aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment

After uploading a new Docker image, even if it has the same tag as the one a task uses, you must copy the latest task definition and then configure the service to use that new revision. Alternatively, you could keep two duplicate task definitions and configure the service to swap between them each time the Docker image is updated.

Basically, for ECS to create a new Docker container, an update to the service has to trigger it, and the only way to make the service trigger is to update it in some way, such as pointing it at a different task definition revision.

Note that existing running containers may not stop automatically just because the service was updated; you may need to look at your task list and stop them manually.

hamx0r
  • This isn't actually true - you can always _manually_ kill a task instead of relying on your service to do it. When the service detects it's been killed, it will attempt to bring it up again, forcing a re-pull of the same `tag` – MrDuk Mar 23 '18 at 15:40

The approach that works for me is similar to the above. After creating your service and task and getting everything running, edit the Auto Scaling group and ensure min, max, and desired are set to 1.

The group may be the default one; if you're not sure, you can get to it by selecting the ECS Instances tab in your cluster, choosing Cluster Resources from the Actions drop-down, and clicking the link near the bottom of the dialog box that opens.

When that's all in place, any time you want to deploy an updated container image, go to the Tasks area of the cluster and stop the task. You'll get a warning, but provided auto-scaling is set up, the service will start it again with the latest push.

No need to create new versions of either the service or the task.

Note that the service/task takes anywhere from an instant to a minute or so to update itself. If you get tired of waiting, you can Run New Task manually. The service won't own it, so it's not ideal, but it will still spin up a new one if it dies.
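The manual "Run New Task" fallback mentioned above has a CLI equivalent; a sketch (cluster and task definition family names are placeholders):

```shell
# Launch a one-off task outside the service's control; with no
# revision number, the latest revision of the family is used.
aws ecs run-task --cluster my-cluster --task-definition my-task-family --count 1
```

Because the service doesn't own this task, it won't be replaced by the service scheduler if it later stops, which is why this is a stopgap rather than a deployment strategy.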

  • By not creating a new task definition revision, we won't be able to roll back in case something goes wrong. Using tags in Docker images is good because you can always tie the current service back to the source code. It's a shame AWS does not provide a good CLI to update just the image/tag of a container. The only simple way is to upload an imagedefinitions file to S3 and trigger a CodePipeline. – Rafael Diego Nicoletti May 21 '22 at 08:41