
I have various Fargate tasks. They work fine. I then have a few additional tasks that require more disk space than Fargate will allow. These have to run on EC2 instances that I have assigned to their respective cluster.

I trigger these tasks using CloudWatch Events.

Because I have so few of these EC2 powered tasks, it seems silly having these EC2 instances sitting around.

My thought was to create / destroy the EC2 instances on demand, probably by using Lambda.

My proposed sequence:

  • CloudWatch start event fires
  • Lambda scales the ECS cluster's EC2 Auto Scaling group up to 1
  • EC2 instance becomes ready (CloudWatch instance state-change event)
  • ECS task is started
  • ECS task finishes and triggers a CloudWatch event
  • Lambda scales the cluster back down to 0

Is this feasible? Is there a pattern or better way to do this? Perhaps the Lambda function is unnecessary if there's a way to trigger the autoscaling straight from the CloudWatch event?

Please note that these tasks are not scheduled, so it's not a matter of scheduling the autoscaling.
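The Lambda step in the sequence above could be sketched roughly as below, assuming boto3 and a hypothetical Auto Scaling group name (neither is from the question). The same handler covers both directions by inspecting the ECS task-state-change event.

```python
# Minimal sketch, assuming boto3 and a made-up ASG name.
ASG_NAME = "ecs-burst-asg"  # hypothetical ASG backing the ECS cluster

def handler(event, context, autoscaling=None):
    """Scale the ASG to 1 on a start event, back to 0 when the
    ECS task-state-change event reports the task has STOPPED."""
    if autoscaling is None:
        import boto3  # only needed inside Lambda, so imported lazily
        autoscaling = boto3.client("autoscaling")
    stopped = event.get("detail", {}).get("lastStatus") == "STOPPED"
    desired = 0 if stopped else 1
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=ASG_NAME,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )
    return desired
```

One Lambda wired to two rules (the start trigger and the ECS `STOPPED` task-state-change event) keeps the scale-up and scale-down logic in a single place.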

Ben Smith

2 Answers


Use AWS Batch. It is a service specifically designed for your situation, and it can be triggered by CloudWatch Events.
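With Batch, the compute environment handles provisioning and teardown, so the whole scale-up/scale-down dance reduces to submitting a job. A hedged sketch, assuming boto3 and made-up queue and job-definition names:

```python
def submit_batch_job(job_name, batch=None):
    """Submit a job; Batch provisions and tears down the compute itself."""
    if batch is None:
        import boto3  # imported lazily so the sketch is testable offline
        batch = boto3.client("batch")
    return batch.submit_job(
        jobName=job_name,
        jobQueue="big-disk-queue",       # hypothetical job queue
        jobDefinition="big-disk-job:1",  # hypothetical job definition
    )
```

A CloudWatch Events rule can also call Batch's `SubmitJob` as a target directly, with no Lambda in between.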

M. Glatki

That should work fine. You could also raise the ECS service's desired count at the same time as the EC2 ASG's desired capacity to remove a step, unless you're launching standalone tasks. The ECS service will just keep trying to launch the task until the instance gets registered to the cluster.

How do you know when a task/instance needs to be started? What's the trigger for that? Whatever it is, you could use it to trigger the Lambda function directly. Or, if it's a CloudWatch metric (for example, the number of messages sitting in an SQS queue), then skip the CloudWatch Event and Lambda function and have a CloudWatch alarm increase the ASG and ECS desired capacity directly.
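The alarm-driven variant in that last sentence could be wired up roughly as follows, assuming boto3 and illustrative queue, ASG, and policy names (none are from the question):

```python
def wire_queue_alarm(queue_name, asg_name, autoscaling=None, cloudwatch=None):
    """Attach a simple ASG scaling policy to a CloudWatch alarm on SQS
    queue depth, so scale-up happens with no Lambda involved."""
    if autoscaling is None or cloudwatch is None:
        import boto3  # imported lazily so the sketch is testable offline
        autoscaling = autoscaling or boto3.client("autoscaling")
        cloudwatch = cloudwatch or boto3.client("cloudwatch")
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName=asg_name,
        PolicyName="scale-up-on-work",   # hypothetical policy name
        AdjustmentType="ExactCapacity",
        ScalingAdjustment=1,
    )
    cloudwatch.put_metric_alarm(
        AlarmName=f"{queue_name}-has-messages",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": queue_name}],
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=1,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )
    return policy["PolicyARN"]
```

A second alarm on the same metric dropping back to zero could drive a matching scale-down policy.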

Shahad
  • 326
  • 1
  • 6