26

I'm playing with Amazon ECS (a repackaging of Docker) and I'm finding there's one Docker capability that ECS does not seem to provide. Namely, I would like to have multiple containers running in an instance, and have requests coming in to IP address 1 map to container 1, and requests coming to IP address 2 map to container 2, etc.

In Docker, binding a container to a specific IP address is done via:

docker run -p myHostIPAddr:80:8080 imageName command

However, in Amazon ECS, there doesn't seem to be a way to do this.

I have set up an EC2 instance with multiple Elastic IP addresses. When configuring a container as part of a task definition, it is possible to map host ports to container ports. However, unlike Docker, ECS does not provide a way to specify the host IP address as part of the mapping.
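For reference, here's what a port mapping looks like in a task definition's containerDefinitions (a minimal sketch; the port values are placeholders). Note that, unlike docker run -p, there is no field for a host IP address:

"portMappings": [
  {
    "containerPort": 8080,
    "hostPort": 80
  }
]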

An additional twist is that I would like outbound requests from container N to have container N's external IP address.

Is there a way to do all of the above?

I've looked through the AWS CLI documentation, as well as the AWS SDK for Java. I can see that the CLI can return a networkBindings array containing elements like this:

{
  "bindIP": "0.0.0.0", 
  "containerPort": 8021, 
  "hostPort": 8021
},

and the Java SDK has a class named NetworkBinding that represents the same information. However, this info appears to be output-only: it comes back in responses, but I can't find a way to provide it to ECS as input.

The reason that I want to do this is that I want to set up completely different VMs for different constituencies, using different containers potentially on the same EC2 instance. Each VM would have its own web server (including distinct SSL certificates), as well as its own FTP and SSH service.

Thanks.

ceejayoz
Mark R
  • I'm having the same issue with our workflow. `aws ecs describe-container-instances` doesn't seem to help. They seem to really want to push you to use an ELB, which for our case is kind of dumb. – four43 Jul 14 '15 at 20:31
  • There seems to be one way to do it now (Q4 2017): https://stackoverflow.com/a/46577872/6309 – VonC Oct 05 '17 at 04:33

3 Answers

5

Here's an actual, logical way to do it. It may sound complicated, but you can implement it in a matter of minutes, and it works. I'm implementing it as we speak.

You create a task definition for each container, a service for each task, and a target group for each service. Then you create just one Elastic Load Balancer.

Application Load Balancers can route requests based on the request path. Using the target groups, you can route requests for elb-domain.com/1 to container 1, elb-domain.com/2 to container 2, etc.
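If it helps, here's roughly what that looks like with the AWS CLI (a sketch; the names, VPC ID, and ARNs are placeholders):

# Create one target group per service (placeholder name/IDs)
aws elbv2 create-target-group --name container-1-tg \
    --protocol HTTP --port 8080 --vpc-id vpc-12345678

# Send /1/* on the ALB's listener to container 1's target group
aws elbv2 create-rule \
    --listener-arn <listener-arn> \
    --priority 1 \
    --conditions Field=path-pattern,Values='/1/*' \
    --actions Type=forward,TargetGroupArn=<container-1-tg-arn>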

Now you are only one step away. Create a reverse proxy server.

In my case we're using nginx: you can create an nginx server with as many IPs as you'd like, and use nginx's reverse proxying capability to route each IP to the corresponding ELB path, which in turn routes to the correct container(s). Here's an example if you're using domains.

server {
    server_name domain1.com;
    listen 80;
    # "vhost" is a custom log_format; define it in nginx.conf or drop it
    access_log /var/log/nginx/access.log vhost;
    location / {
        # The trailing slash makes nginx map /foo to /1/foo on the ELB
        proxy_pass http://elb-domain.com/1/;
    }
}

Of course, if you're actually listening on IPs, you can omit the server_name line and just listen on the corresponding interfaces, as in the sketch below.
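For instance, a server block bound to a single address could look like this (a sketch; 10.0.0.1 stands in for the private address behind one of your Elastic IPs):

server {
    # Bind to one specific local address instead of matching by name
    listen 10.0.0.1:80;
    location / {
        proxy_pass http://elb-domain.com/1/;
    }
}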

This is actually better than assigning a static IP per container, because it lets you have clusters of Docker machines where requests are balanced across the cluster for each of your "IPs". Recreating a machine doesn't affect the static IP, and you don't have to redo much configuration.

Although this doesn't fully answer your question because it won't allow you to use FTP and SSH, I'd argue that you should never use Docker to do that, and you should use cloud servers instead. If you're using Docker, then instead of updating the server using FTP or SSH, you should update the container itself. However, for HTTP and HTTPS, this method works perfectly.

TheNavigat
4

One option: Create an ELB for each client, and then assign certain containers to each ELB.

[1] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
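For instance, a service can be attached to its own classic ELB when it's created; a rough sketch with placeholder names (the container name and port must match the task definition):

aws ecs create-service --cluster my-cluster \
    --service-name client1-service \
    --task-definition client1-task \
    --desired-count 1 \
    --role ecsServiceRole \
    --load-balancers loadBalancerName=client1-elb,containerName=client1-web,containerPort=8080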

Adam Keck
  • Ca-ching! 18 bucks a month for one ELB. Now, who wants microservices with ECS? https://aws.amazon.com/elasticloadbalancing/pricing/ – Knots Jun 28 '16 at 19:56
  • 1
    @Knots we had the same problem. Then we switched to Lambda + API Gateway and our cost went down to 10 cents. – grepe Apr 25 '17 at 15:47
  • You can now use a single ALB (instead of classic ELBs) for all of your services rather than 1 per service. They need to either be on different hostnames or different paths on a hostname. – A.J. Brown Jun 11 '18 at 14:44
1

You can't assign an IP to the container itself, but you can dedicate an EC2 instance to a specific container. Then, wherever you need to access that service, you can reference the EC2 host running the container.

  • Create a dedicated cluster for your services with this requirement
  • Launch an EC2 instance from the ECS-optimized AMI, using your preferred instance type
    • Be sure to assign that instance to the above cluster using the UserData option, as described in that guide (see the sketch after this list).
  • Create a TaskDefinition with NetworkMode set to "bridge" (same as your desktop)
  • Create a Service Definition with:
    • LaunchType set to EC2
    • Cluster set to the cluster you created above
    • Task definition set to the task definition you created above
  • Assign any security groups to the EC2 instance as you would otherwise.
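A minimal sketch of the two key pieces above, assuming a cluster named my-cluster (all names, the image, and the ports are placeholders). First, the UserData that registers the instance with the cluster:

#!/bin/bash
# Point the ECS agent on this instance at the dedicated cluster
echo ECS_CLUSTER=my-cluster >> /etc/ecs/ecs.config

And the task definition with bridge networking:

{
  "family": "dedicated-task",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "imageName",
      "memory": 256,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 80 }
      ]
    }
  ]
}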

Though you're still talking directly to an EC2 instance, you can control the IP of the container (indirectly) just as you would the EC2 instance's. This saves you the headache of running the services on "bare metal" and makes the service and its configuration easier to manage.