
We're running a multi-container Docker setup on AWS Elastic Beanstalk. While load testing, we noticed that no matter what the load was, CPU% maxed out at 50%. The instance size has 2 cores, so it should be able to use both.

Additional info:

  • The main Docker container runs Ruby on Rails, and this is what gets taxed under load testing. The other containers are rarely used.
  • I've double-checked our ECS task definition. It is set to allow a max of 2 cores for any container.
  • I've SSH'd into the EC2 instance and monitored the container. The Ruby process maxes out at 100% (1 core). I also see docker in there, but it runs at a much smaller percentage.

My question is: how can we remove the CPU cap on this setup? Since ECS doesn't seem to be the culprit, what is?

  • I'm not sure what you're confused about. Just because you've assigned two cores to your container doesn't mean that both will be magically used. Your application design needs to account for it. – EEAA May 22 '17 at 20:38
  • What do you mean by the last part? I would have thought that my Rails container would expand to the available cores, unless something is limiting it. – Grey Vugrin May 22 '17 at 20:43
  • 1
    Only if it's designed to be multi-threaded. – EEAA May 22 '17 at 20:44
  • Ah, I see. I assumed it was an AWS issue, when Puma (the Rails server) might simply not be set up for it. I'll check that out first. Thanks. – Grey Vugrin May 22 '17 at 20:47

1 Answer


The comments pointed me in the right direction.

I ended up needing to pass the -w (workers) flag when starting my Puma server, in the Dockerrun.aws.json file. Running multiple Puma workers (forked processes) lets the app use more than one core, which a single MRI Ruby process cannot do on its own.
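The same effect can also be configured in Puma's config file rather than on the command line. A minimal sketch, assuming a standard Rails layout (the worker/thread counts and environment variable names here are illustrative defaults, not taken from my actual setup):

```ruby
# config/puma.rb — hypothetical sketch; values are assumptions
# One worker (forked process) per core lets MRI Ruby use both cores,
# since a single process is limited to one core at a time.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

# Threads per worker handle concurrent I/O within each process.
max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads max_threads, max_threads

# Load the app before forking so workers share memory via copy-on-write.
preload_app!
```

With this in place, `bundle exec puma -C config/puma.rb` is equivalent to passing `-w 2 -t 5:5` on the command line in the container's start command.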
