
I know this is not an 'original question'; the general topic is covered extensively. Nevertheless, I'm struggling with my particular setup:

I'm basically trying to convert the following docker-compose file into an ECS-based deployment in AWS.

version: '3'
services:
  app:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
    restart: always
    container_name: "my-app"
    volumes:
      - ./src:/app/src
      - ./.env:/app/.env
      - ./store:/app/store
    ports: #HOST:CONTAINER
      - "3000:3000"
      - "4000:22"
    networks:
      - my-network
  my-microservice:
    build:
      context: .
      dockerfile: docker/Dockerfile.MY.MICROSERVICE
    restart: always
    container_name: "my-microservice"
    ports:
      - "5000:5000"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge

I'm using AWS ECS and ECR, behind an ALB, deployed to EC2 (EC2 launch type).

I have one service running in my cluster, within which I've 'defined' this deployment.
The service has one task definition.
The task has 2 containers.

Container 1 (my-app) is a web server listening on port 3000.
Container 1 (my-app) also has an SSHD server listening on port 22.
(I understand now there are better ways to manage SSH in ECS, let's pretend it doesn't matter for this question).

Port mapping is currently 0:3000 in the container definition.
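
In the task definition JSON that the console generates, I believe that mapping looks roughly like this (paraphrased, not a verbatim copy; a hostPort of 0 means ECS assigns a dynamic host port on the instance):

"portMappings": [
  {
    "containerPort": 3000,
    "hostPort": 0,
    "protocol": "tcp"
  }
]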

Container 2 (my-microservice) also has a web server running on port 5000.

I'm using one target group.

Initially I deployed container 1 successfully and am able to reach it via the load balancer, but only on the first exposed port (3000, via public 80/443 through the ALB).

Now I'm trying to add container 2, and also to reach the second service (SSHD on port 22) in container 1.
The task starts successfully, and health checks pass.

However, I can still only reach container 1 from outside, and only on the first mapped port (port 3000, via public 80 or public 443).

If I try to define additional port mappings in the container 1 configuration, the task will no longer run.

For example, if I try to change container 1 to two port mapping definitions:
0:3000
22:22
or
3000:3000
22:22
or
0:3000
0:22

I get:
"was unable to place a task because no container instance met all of its requirements. The closest matching container-instance 7a628412-1ecc-4f8d-8615-672cfd62bb17 is already using a port required by your task."
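
In the same JSON terms, what I'm attempting for container 1 is roughly the following (shown for the 0:3000 / 22:22 variant; the other attempts only change the hostPort values):

"portMappings": [
  {
    "containerPort": 3000,
    "hostPort": 0,
    "protocol": "tcp"
  },
  {
    "containerPort": 22,
    "hostPort": 22,
    "protocol": "tcp"
  }
]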

I've temporarily made all ports in the security group wide open, and set up routing rules in the ALB which forward 80, 443, 22, and 5000 all to the target group.

From other reading/logic it seems like maybe I need multiple target groups, but I can't actually define more than one target group when I create the service in the console.
That is, each load balancer definition accepts only one target group, and each service definition accepts only one load balancer.

Right now, if I try to hit port 5000, it is also directed to container 1, not container 2.
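
For completeness, as far as I can tell the load balancer section of the service (the loadBalancers parameter behind the console's service wizard) currently has a single mapping along these lines; the target group ARN below is just a placeholder:

"loadBalancers": [
  {
    "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-target-group/abc123def456",
    "containerName": "my-app",
    "containerPort": 3000
  }
]

If I'm reading this right, every ALB rule that forwards to this one target group ends up at my-app on port 3000, which would explain the behaviour above.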

In summary I'm trying to achieve:

  • public 80 and public 443 —> container 1, port 3000
  • public 22 (or, if needed, another port like 4000) —> container 1, port 22
  • public 5000 —> container 2, port 5000
  • container 2 —> container 1, port 3000
  • container 1 —> container 2, port 5000
  • container 1 —> container 2, port 22 (see the sketch just after this list for the container-to-container part)
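
For the last three (container-to-container) bullets, my current understanding from the docs is that, because both containers are in the same task and I'm on the EC2 launch type with bridge networking, I can use the links field in the container definitions, e.g. something like this (names as in my task definition; I haven't verified this yet):

"containerDefinitions": [
  {
    "name": "my-app",
    "links": ["my-microservice"]
  },
  {
    "name": "my-microservice"
  }
]

That should let container 1 reach the microservice as my-microservice:5000. I'm less sure about the reverse direction (container 2 to container 1 on 3000); possibly awsvpc network mode, where containers in a task share localhost, is the cleaner option there.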

Note: all of this has been configured via the AWS admin GUI (console) so far.

I've been testing and updating a lot by trial and error, and feel my basic approach/understanding must be flawed.

  1. Do I need separate services for each container?
  2. Do I need one service but separate tasks for each container? (If the latter, WHY am I allowed to create multiple containers in one task??)
  3. Do I need a new ALB for each container?
  4. A new target group for each, etc.?
  5. Or is an ALB wrong here, and do I need to switch back to a Classic Load Balancer?
  6. Lastly, should I leave it as is and try to create a 3rd NGINX container which acts as a routing proxy, controlling ingress that way? It seems like that should be the load balancer's job, but I'm a bit confused at this point!

Sorry for the long post. If I'm missing pertinent setup info or need to clean up any details, I will do so.

Lastly, I've read about the ecs-cli compose tool, but I'd like to first understand how to do this 'manually' before trying to leverage a more automated tool.

Any feedback or advice is welcome here, as are pointers to helpful tutorials that might be relevant to this use case. Most of the ones I've found that deal with this tend to be about more complex network topologies that are a bit too advanced for me right now. It seems like my use case should be pretty standard/noob-friendly.

thanks a lot!

baku
  • If the answer below helped you solve your problem please upvote and accept it. That's the StackExchange way to thank the users for taking their time answering your questions. Thanks :) – MLu Mar 27 '20 at 02:35
  • Thanks for the follow-up. I've been swamped and have not been able to follow these instructions yet, but I have every intention of following SE protocol once I am able to verify. Thanks for the reminder – baku Mar 27 '20 at 03:23

1 Answer

  1. You can't pass SSH through an ALB: an ALB handles only HTTP / HTTPS traffic, so it won't let SSH through.

    You can use an NLB (Network Load Balancer) for SSH if you want. (However, SSH'ing into containers is a big NO NO ;)

  2. You can't mix different services in one Target Group. Create two target groups - one for the port 3000 container and one for the port 5000 container. Then use different ALB paths for each, e.g. /app3000 and /app5000, mapping to the respective TGs. They can both be behind one ALB, just different TGs (a rough sketch of the two listener rules is below).
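
    In API terms the two forwarding rules on the ALB listener would look something like this (rough sketch only; the target group ARNs are placeholders):

    "Rules": [
      {
        "Priority": "10",
        "Conditions": [
          { "Field": "path-pattern", "Values": ["/app3000*"] }
        ],
        "Actions": [
          { "Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tg-app-3000/1111aaaa" }
        ]
      },
      {
        "Priority": "20",
        "Conditions": [
          { "Field": "path-pattern", "Values": ["/app5000*"] }
        ],
        "Actions": [
          { "Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tg-micro-5000/2222bbbb" }
        ]
      }
    ]

    You can configure the same thing in the console under the ALB listener's rules.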

Hope that helps :)

MLu
  • Thanks a lot for taking the time to answer. I will give it another shot with your advice – baku Mar 14 '20 at 01:22
  • OK, I know this probably seems obvious, but regarding "You can't mix different services in one Target Group": should I be trying to map each container to a different target group within the same service? Or are you saying I need a unique service per container? So two target groups, and the 2 services/target groups can still share a single load balancer? – baku Mar 14 '20 at 01:28
  • @baku I would have to test it but I think the easiest way would be two ECS Services and two TGs. Yes they can share the same ALB, however each service/TG will have a separate path in the URL. It *may* be possible to have a single service with two containers each with its own Target Group, I’d have to test that (which I can’t right now). – MLu Mar 14 '20 at 01:33
  • Thanks again for your help here; that advice helped me get everything sorted (sorry for the delay in accepting your answer) – baku Apr 01 '20 at 14:17