
I have an instance with multiple Tomcat applications and standalone JARs running. If the machine has 2 vCPUs and 8 GB RAM, the individual applications can use the resources on demand (based on the Xms and Xmx values set for Tomcat and for the individual JARs). ECS is not in the picture at this point.

Now I'll be moving the applications to containers on EC2 instances (not Fargate). Is it possible to have task definitions where the CPU and memory I specify sum to more than the actual CPU or RAM of the EC2 host?

Because I don't expect all applications to be utilizing 100% of the memory allocated to them in the task definition. Would it work to have an ECS host with 4 vCPUs on which I place 10 tasks, all with 4 vCPUs specified in the task definition? I know the tasks won't each utilize 4 vCPUs, but if any task does need the capacity, it shouldn't be restricted from using the full capacity of the host.

I'm aware ECS has scaling capabilities, which I plan to use, but I'm aiming to ensure that I don't over-provision the number of EC2 hosts I'm using for ECS.

– Kohini

1 Answer


The short answer is that you can achieve this, but with a nuance. If you configure a task size in the task definition, you won't be able to over-commit CPU/memory resources. However, if you are deploying to EC2 you can omit the task size from the task definition, in which case every task is assumed to have access to the full host capacity. You can then fine-tune CPU/memory resources (guarantee, ceiling, etc.) at the individual container level. This will allow you to over-commit.

The longer answer is here.
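
To make the over-commit setup concrete, here is a minimal sketch using the AWS SDK for Java v2 (`software.amazon.awssdk:ecs`). The family name, image, and the specific CPU/memory numbers are placeholders for illustration, not values from your setup; the point is that no task-level size is set, while each container carries its own reservation (guarantee) and hard limit (ceiling):

```java
import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.Compatibility;
import software.amazon.awssdk.services.ecs.model.ContainerDefinition;
import software.amazon.awssdk.services.ecs.model.RegisterTaskDefinitionRequest;
import software.amazon.awssdk.services.ecs.model.RegisterTaskDefinitionResponse;

public class RegisterOvercommitTask {
    public static void main(String[] args) {
        try (EcsClient ecs = EcsClient.create()) {
            ContainerDefinition tomcat = ContainerDefinition.builder()
                    .name("tomcat")
                    .image("my-registry/tomcat-app:latest") // placeholder image
                    .essential(true)
                    // memoryReservation = soft limit in MiB (the scheduler's
                    // placement guarantee); memory = hard ceiling in MiB (the
                    // container is killed if it exceeds this).
                    .memoryReservation(512)
                    .memory(2048)
                    // cpu is in CPU units (1024 = one vCPU); on EC2 launch
                    // type these act as relative shares under contention
                    // rather than a hard cap.
                    .cpu(256)
                    .build();

            RegisterTaskDefinitionRequest request = RegisterTaskDefinitionRequest.builder()
                    .family("tomcat-app") // hypothetical family name
                    .requiresCompatibilities(Compatibility.EC2)
                    // Note: no task-level cpu()/memory() here -- omitting the
                    // task size on EC2 is what allows over-committing the host.
                    .containerDefinitions(tomcat)
                    .build();

            RegisterTaskDefinitionResponse response = ecs.registerTaskDefinition(request);
            System.out.println(response.taskDefinition().taskDefinitionArn());
        }
    }
}
```

Because the scheduler places tasks based on the 512 MiB reservations rather than the 2048 MiB ceilings, it will keep packing tasks onto a host whose RAM is smaller than the sum of all the ceilings, which is exactly the over-commit behaviour you are after.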

– mreferre
  • Would I have to set Xms and Xmx values within the Tomcat image? I imagine if I don't set them in the image then, even though EC2 host resources are available to the container, Java's default parameters will come into play to determine how much memory the application deployed within Tomcat can use. Thanks! – Kohini Jun 27 '21 at 15:36
  • This depends on the specific Java version being used. Earlier versions of Java don't care about containers/cgroups and just assume that ALL the host resources they see (and they can always see ALL of them) are available to them. Later versions of Java are clever enough to realize they live within cgroup virtual boundaries and are aware of the container construct (so they know how much has been given to them); there is a small verification sketch after these comments. IMO setting these parameters inside the app is never a bad idea, regardless. PS there is a note in the blog on this. – mreferre Jun 28 '21 at 07:46
  • There is a line in the blog, `if you only set the hard limit, that represents both the reservation and the ceiling`, which seems not to make sense. If I only set a hard limit of 2 GB (but no soft limit), why would it also be the reserved value for the container? I would imagine that in the absence of a soft limit there is no reserved memory; the container just takes the memory it needs from the host, with the 2 GB cap as the hard limit. – Kohini Jul 23 '21 at 12:39
  • It is how it's been implemented. I believe it is due to the fact that when you don't set a task size, the system still needs to know the amount of reserved memory, and if the reservation is not set explicitly, the limit is taken as the reservation. If you want the limit to be different from the reservation, you need to explicitly set both. – mreferre Jul 23 '21 at 15:25
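
On the Xms/Xmx discussion above: a quick way to check what a given JVM actually detects inside a container is a sketch like the following (the class name is arbitrary). On container-aware JVMs (8u191+ and 10+) the numbers reflect the cgroup limits, while older JVMs report the whole host, which is why explicit `-Xms`/`-Xmx` (or `-XX:MaxRAMPercentage`) settings are a good idea regardless:

```java
// Minimal sketch: print what this JVM believes its CPU and memory limits are.
public class JvmLimits {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Container-aware JVMs report the cgroup CPU allocation here;
        // older JVMs report every core on the EC2 host.
        System.out.println("Available processors: " + rt.availableProcessors());
        // Without -Xmx, the default max heap is derived from whatever
        // memory the JVM thinks it can see (host RAM or cgroup limit).
        System.out.println("Max heap (MiB): " + rt.maxMemory() / (1024 * 1024));
    }
}
```

Running this inside the container, with and without container-level memory limits set, shows which of the two behaviours your Java version exhibits.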