
When an Elastic Load Balancer (ELB) is associated with an Auto Scaling group, it is possible to specify a grace period during which new EC2 instances will not be terminated, even if they are marked as unhealthy by the ELB. Is it possible to specify a similar grace period, during which new ECS tasks will not be killed and restarted by their associated ECS service, even if the container instance on which a task is running has been marked unhealthy by the ELB?

Update:

In our current use case, the Docker container being run as an ECS task contains a JBoss instance that loads a number of caches on startup. These caches can take several minutes to load. However, the ECS service registers the container instance with the ELB as soon as the container has started, which means that traffic can be routed to the new container before it is ready to accept it. We could increase the health check interval and the healthy/unhealthy thresholds on the ELB to prevent the ELB from routing traffic to the instance, and the ECS service from restarting the container, until the caches have been loaded. However, increasing the interval and thresholds is not desirable, because if an instance is marked as unhealthy after the caches have been loaded, the ECS service should restart the container as soon as possible (which requires a shorter health check interval and smaller thresholds).
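To make the tradeoff concrete: the time before the ELB marks a target unhealthy is roughly the health check interval multiplied by the unhealthy threshold, so any settings generous enough to survive a multi-minute cache load also make post-startup failure detection equally slow. A small illustrative sketch (the numbers below are examples, not our actual configuration):

```python
def detection_time(interval_seconds, unhealthy_threshold):
    """Approximate seconds before the ELB marks a target unhealthy:
    the health check must fail `unhealthy_threshold` consecutive times,
    one check per `interval_seconds`."""
    return interval_seconds * unhealthy_threshold

# Short interval and threshold: failures are detected quickly, but a
# container whose caches take minutes to load is killed before it is
# ready to serve.
fast = detection_time(10, 3)   # 30 seconds

# Interval and threshold stretched to cover a five-minute cache load:
# the container survives startup, but a genuine post-startup failure
# now also takes five minutes to detect.
slow = detection_time(60, 5)   # 300 seconds
```

The same two knobs therefore cannot serve both the startup case and the steady-state case, which is exactly why a separate grace period would help.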

Thus, is it possible to apply a grace period during which traffic will not be routed to a new container by the ELB and the ECS service will not restart the container (even if it fails the health checks)? Or failing that, are there any suggestions regarding a solution for our use case?

1 Answer


After a discussion with the support team, it turns out that ECS cannot support our current use case.

There is a workaround that solves one of the issues we are facing: create a separate, essential health-check container in the same ECS task as the actual application container. The purpose of the health-check container is to monitor the application container and determine when the application has started completely. If it detects that the application has failed to start, it exits, causing the ECS service to cycle the task. The ELB is then configured to perform its health checks against the health-check container, which will always report that it is up on the relevant port. This workaround prevents the ECS service from cycling the ECS task due to failed health checks.
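A minimal sketch of the health-check container's monitoring loop, assuming a hypothetical `probe` callable (in the real container this would be, say, an HTTP GET against the JBoss container's port over the task's network):

```python
import sys
import time


def wait_for_startup(probe, timeout_seconds, poll_seconds=5):
    """Poll `probe` (a callable returning True once the application is up)
    until it succeeds or `timeout_seconds` elapses. Returns True on
    success; False means the application failed to start in time."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(poll_seconds)
    return False


if __name__ == "__main__":
    # Hypothetical probe: replace with a real check against the
    # application container (e.g. an HTTP request to its port).
    app_started = wait_for_startup(lambda: True, timeout_seconds=600)
    if not app_started:
        # Exiting a container marked "essential" causes ECS to stop the
        # whole task, so the service cycles it - the behaviour the
        # workaround relies on.
        sys.exit(1)
    # Otherwise, keep answering the ELB health checks as "up" from here.
```

The names and timeout are illustrative; the essential part is that the sidecar, not the application, is what the ELB health check sees.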

However, the ELB will begin routing traffic to the application container immediately, even if the application container is not yet ready to receive it (for example, because it is still waiting for a cache to load). Currently, there is no way to delay the ELB from sending traffic to the application container, as the ECS service provides no support for a grace period. We have managed to work around this issue by delivering messages to our application containers via SQS and only having them pull from the queue once their caches are fully loaded. However, we have future use cases (such as serving web requests) where this is not a feasible option. To this end, I intend to raise a feature request for the grace period.
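The queue-based workaround boils down to gating message consumption on a readiness flag, so the container can be registered with the ELB and yet do no work until its caches are loaded. A stripped-down sketch of the pattern, using an in-memory queue in place of SQS (in the real container, `boto3` receive calls would replace `work_queue.get`):

```python
import queue
import threading


def consume(work_queue, ready, handle, stop):
    """Pull work only after `ready` is set - i.e. after the caches have
    finished loading - even though the container has long been
    registered with the load balancer."""
    ready.wait()  # block until startup (cache loading) completes
    while not stop.is_set():
        try:
            item = work_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        handle(item)
        work_queue.task_done()


# Illustrative wiring: in the real container, `ready` would be set by
# the cache-loading code path once loading finishes.
work_queue = queue.Queue()
ready = threading.Event()
stop = threading.Event()
results = []
worker = threading.Thread(
    target=consume, args=(work_queue, ready, results.append, stop)
)
worker.start()
work_queue.put("request-1")  # arrives before the caches are loaded
ready.set()                  # caches loaded: consumption begins
work_queue.join()            # wait until queued work is handled
stop.set()
worker.join()
```

This works precisely because the work arrives via a pull model; a push model such as web requests through the ELB offers no equivalent gate, which is why the grace period is needed there.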

As an aside, both Kubernetes (http://kubernetes.io/v1.0/docs/user-guide/walkthrough/k8s201.html#application-health-checking) and Marathon (https://mesosphere.github.io/marathon/docs/health-checks.html) already support a startup grace period for health checks, if anyone reading this is happy not to use a managed service.

  • I have also opened a thread for this question on the AWS forums: https://forums.aws.amazon.com/thread.jspa?threadID=215740&tstart=0 – iBlocksShaun Sep 11 '15 at 10:02
  • Thanks for the detailed info. Doesn't this imply that you can't use the combination of ECS and ELB with any Docker container that takes more than ~30 seconds to start (or whatever timeout you have for health_check_interval * health_check_unhealthy_threshold)? If so, that seems like a severe limitation. – Yevgeniy Brikman Dec 10 '15 at 20:30
  • I was wondering if you found a low impact solution. Running kube is not in the works for me right now. – EightyEight Aug 15 '17 at 19:21
  • Unfortunately, no. We have switched to using Kubernetes clusters on AWS. These are created with kops (https://github.com/kubernetes/kops), the official tool for setting up K8S on AWS. It can also output Terraform config files and more recently CloudFormation. The community around K8S is vibrant and growing rapidly. Even AWS joined the Cloud Native Computing Foundation (CNCF) recently. This may or may not suggest that a managed K8S offering is on the way for AWS. Maybe at ReInvent 2017 in November (purely speculation of course). – iBlocksShaun Sep 08 '17 at 12:06