
As per the service utilization documentation, it is possible to have memory utilization over 100% when using soft limits on ECS tasks (because you don't want hard limits to kill your app): utilization is reported relative to the memory reserved by the tasks, not the memory available on the host. For CPU utilization this is always the case. For example, we have a micro-service with a soft limit of 500 MB; with two instances of this task it reports a memory utilization of 124%, and launching a 3rd instance makes it drop to 103%.
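For reference, here is a minimal sketch (using boto3; the family, image and values are illustrative placeholders, not our actual task definition) of the kind of container definition in question, where only a soft limit (memoryReservation) is set:

    import boto3

    ecs = boto3.client("ecs")

    # Hypothetical task definition: only memoryReservation (soft limit) is set,
    # so the container may use more than 500 MiB and the service-level metric
    # is computed against that reservation -- which is how values above 100% appear.
    ecs.register_task_definition(
        family="my-microservice",                   # illustrative name
        containerDefinitions=[
            {
                "name": "app",
                "image": "my-registry/my-app:latest",  # illustrative image
                "cpu": 256,                  # reserved CPU units (always a soft reservation)
                "memoryReservation": 500,    # soft limit in MiB
                # "memory": 1024,            # hard limit, deliberately omitted
                "essential": True,
            }
        ],
    )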

Docker stats confirms these figures:

CONTAINER           CPU %               MEM USAGE / LIMIT       MEM % 
6f5cd837b3d7        0.64%               670.1 MiB / 3.862 GiB   16.94%
817c573afac7        9.66%               590.3 MiB / 3.862 GiB   14.93%
7a50d1be6e6e        20.34%              427.1 MiB / 3.862 GiB   10.80%

This does, however, make auto-scaling very hard. For example, a CPU utilization of 400% might mean that there is plenty of CPU to spare and no new instance is needed, whereas a CPU utilization of 100% might mean that all tasks are using their reserved CPU units and a new instance might be needed (either a new EC2 instance or a new task instance). A sketch of the kind of scaling rule I mean follows below.
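For illustration, this is a minimal sketch of such a rule: an Application Auto Scaling target-tracking policy on the service's average CPU utilization (cluster name, service name, capacities and target value are placeholders, not a recommendation):

    import boto3

    aas = boto3.client("application-autoscaling")

    # Placeholder identifiers -- not the actual cluster/service from this question.
    resource_id = "service/my-cluster/my-microservice"

    # Make the service's desired task count scalable.
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=6,
    )

    # Target tracking on average CPU utilization. With soft limits this metric
    # can exceed 100%, so the target value is only meaningful if you know how
    # far above the reservation the tasks are allowed to drift.
    aas.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 75.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "ScaleInCooldown": 60,
            "ScaleOutCooldown": 60,
        },
    )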

Does this even make sense? What would be a logical rule on ECS for auto-scaling based on CPU and memory metrics when using only a soft limit?

P_W999
  • Your question is kind of confusing - you mention CPU and Memory in the same description, not entirely making it clear which metric you're referring to in your example. Can you clarify a bit more? – MrDuk Jan 29 '18 at 18:29
  • Both metrics are important. On high load, the CPU and/or the memory usage might rise beyond its limit, and a new EC2 instance and/or container instance would be required to handle the high load. – P_W999 Feb 27 '18 at 14:38
