
I have a new application that I've created via a docker-compose file. This file contains 2 networks:

version: '2.1'

# ----------------------------------
# Services
# ----------------------------------
services:
  application:
    image: tianon/true
    volumes:
      - ${APPLICATION_PATH}:/var/www

  nginx:
    build:
      context: ./docker/nginx
    volumes_from: 
      - application
    volumes:
      - ${DOCKER_STORAGE}/nginx-logs:/var/log/nginx
      - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
    ports:
      - "${NGINX_HTTP_PORT}:80"
      - "${NGINX_HTTPS_PORT}:443"
    networks:
      - frontend
      - backend

  redis:
    build:
      context: ./docker/redis
    volumes: 
      - ${DOCKER_STORAGE}/redis:/data
    ports:
      - "${REDIS_PORT}:6379"
    networks: 
      - backend

# ----------------------------------
# Networks
# ----------------------------------
networks:
  frontend:
    driver: "bridge"
  backend:
    driver: "bridge"

# ----------------------------------
# Volumes
# ----------------------------------
volumes:
  redis:
    driver: "local"

You'll notice I have 2 networks here, frontend and backend. The question is simple:

  • How do I expose frontend to the world, but allow backend services to communicate with each other without exposing them to the world? (I'm assuming using iptables, or specifying in docker which network I want to expose to the host)
  • This is a simple 1 server setup (Digital Ocean), running Ubuntu 18.04 LTS
  • In the example above, I should be able to access nginx from the outside world on port 80 or 443, but NOT redis. The nginx container should be able to reach the redis service internally on port 6379.
  • To be clearer: my intent right now is for the backend services to communicate with each other, so removing EXPOSE would work for now, but ultimately I want to partition the two networks. For the backend services I want to restrict access to certain IP ranges (allowing some other non-docker networks to communicate), and for the frontend services I want to open the exposed ports to the world.

For the bounty: There's clearly a way in docker to define multiple overlay networks. I had assumed it would be simple to just expose one network to the outside world, and I'm trying to find an easy way in docker or iptables to do that. The current answer gives me some direction, but requires manual rules for each port. I'd like a proper answer on how to secure my "frontend"-facing network without having to specify individual ports in iptables.

Blue

4 Answers

To complement the answer of @BMitch for your requirements:

You can do this with an additional inter-container network (or multiple, if you see a need or use case for that), because for such a setup to work reliably it is essential that each container has only one non-internal network attached. Otherwise you cannot really be certain which bridge network will be chosen for incoming traffic on a mapped port (or at least it is not really configurable).

An example compose file that shows such a setup:

version: '3.6'
services:
  c1:
    container_name: c1
    image: "centos:latest"
    entrypoint: /bin/bash -c "while sleep 10; do sleep 10; done"
    ports:
      - "5000:5000"
    networks:
      - front
      - inter
  c2:
    container_name: c2
    image: "centos:latest"
    entrypoint: /bin/bash -c "while sleep 10; do sleep 10; done"
    ports:
      - "5001:5001"
    networks:
      - inter
  c3:
    container_name: c3
    image: "centos:latest"
    entrypoint: /bin/bash -c "while sleep 10; do sleep 10; done"
    ports:
      - "5002:5002"
    networks:
      - back
      - inter
  c4:
    container_name: c4
    image: "centos:latest"
    entrypoint: /bin/bash -c "while sleep 10; do sleep 10; done"
    ports:
      - "5003:5003"
    networks:
      - back
      - inter
networks:
  front:
    name: front
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: dockerfront
  back:
    name: back
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: dockerback
  inter:
    name: inter
    driver: bridge
    internal: true
    driver_opts:
      com.docker.network.bridge.name: dockerinter

This creates one bridge (dockerfront) for your public services, one bridge (dockerback) for your backend services, and one internal bridge (dockerinter) that does not publish ports even when they are requested and can therefore safely be added as an additional network, and assigns these networks to the containers.
In effect c2 will not be reachable on port 5001 from outside, so the ports mapping could be omitted for that container; it is included just to show the result:

# iptables -nvL DOCKER -t nat
Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  dockerback *       0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  dockerfront *       0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
    0     0 DNAT       tcp  --  !dockerback *       0.0.0.0/0            0.0.0.0/0            tcp dpt:5002 to:172.27.0.2:5002
    0     0 DNAT       tcp  --  !dockerfront *       0.0.0.0/0            0.0.0.0/0            tcp dpt:5000 to:172.25.0.2:5000
    0     0 DNAT       tcp  --  !dockerback *       0.0.0.0/0            0.0.0.0/0            tcp dpt:5003 to:172.27.0.3:5003

Now you can add the access rule for the dockerback bridge to the DOCKER-USER chain:

# iptables -I DOCKER-USER ! -s 10.0.0.0/24 -o dockerback -j DROP
# iptables -nvL DOCKER-USER
Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    3   156 DROP       all  --  *      dockerback !10.0.0.0/24          0.0.0.0/0
   10   460 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Take care: the current version 19.03.3 has a bug where the DOCKER-USER chain is not created; it is already fixed for 19.03.4, which should become available soon. If you have the affected version you can add the chain yourself (this has to be done after each restart of the docker daemon):

iptables -N DOCKER-USER
iptables -I FORWARD -j DOCKER-USER
iptables -A DOCKER-USER -j RETURN 

You may find a more appropriate place for such rules, but this is the way docker suggests. (One could also think about pinning the subnets of the bridge networks and putting the rule in a generic FORWARD rule that is not subject to docker's manipulation on daemon restarts, interface re-creation, and so on.)

In addition (or instead) you could use an additional IP address on your machine to bind the backend services to a different IP than the frontend services and filter at that level. "In addition" means the exact same docker setup as above, but additionally specifying the com.docker.network.bridge.host_binding_ipv4 driver option (which is just the default IP for published ports); "instead" means using only one network and specifying the bind address per service according to its type. Then block FORWARD in DOCKER-USER (or wherever appropriate) using the conntrack module's ctorigdst match for the incoming interface.
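As a rough sketch of the "in addition" variant (the address 203.0.113.10 is a placeholder for a secondary IP on your host, not something from the original setup; the network definition otherwise follows the compose file above):

```yaml
networks:
  back:
    name: back
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: dockerback
      # Published ports of containers attached to this network now bind
      # to this host IP by default instead of 0.0.0.0 (placeholder IP):
      com.docker.network.bridge.host_binding_ipv4: "203.0.113.10"
```

You can then write host-level firewall rules against that address rather than against individual container ports.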

EOhm

Ok, if you want your redis to be reachable only from specific hosts, you'll need to use iptables.

iptables -I FORWARD -i public_iface -p tcp --dport 6379 -j DROP
iptables -I FORWARD -i public_iface -s allowed_ip -p tcp --dport 6379 -j ACCEPT

As I don't know if you already have something in iptables, I chose to insert the rules at the top.

The -I option inserts the rule at the top of the chain, so first insert the rule to deny all traffic to redis, then insert another rule before it to allow traffic from the specific host.

Here we use the FORWARD chain and not the INPUT chain because the traffic is not destined for the docker host itself but forwarded to a container.

You can check the result with

iptables -L

And you might want to look up how to make the firewalling rules reboot proof.

Ben-Banso
  • As much as I appreciate this, this still feels hacky. I had assumed that if you create multiple overlay networks that in docker you could simply open one network up to the outside world, or declare one network public. If I add more services in the future, I need to update iptables further. I'll still toss an upvote because this is an answer. – Blue Oct 24 '19 at 13:04
  • @FrankerZ but if the server you want to connect to redis is not a docker host, it IS an outside world server. – Ben-Banso Oct 24 '19 at 13:09
  • @FrankerZ if you want to add another redis server within docker, you could do it in a swarm cluster. In the same cluster, containers attached to the same network are able to communicate without exposing the containers publicly. Apart from that, I don't see how you could do it. – Ben-Banso Oct 24 '19 at 13:12
  • Not really looking for a cluster. Just want to expose only one network to the outside world, and the backend to a list of backend servers (That don't use docker). – Blue Oct 25 '19 at 00:23

Docker doesn't expose any network to the outside world, docker networks are used to allow containers to communicate between each other. EXPOSE also does not modify communication to the outside world, or even between containers, it is simply metadata that is most often used as documentation between the image creator and the individual deploying the containers.

What does allow communication to the outside world is publishing the port, which creates a port forward from the docker host network namespace into the container's listening port. Therefore, to achieve your goal, you should only publish the ports you want to have externally accessible. And then use docker networks to control communication between containers.
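The advice above, sketched against the compose file from the question (only the relevant parts reproduced; the ${...} variables are kept as in the original): nginx publishes ports to the host, redis publishes nothing, and both share the backend network:

```yaml
services:
  nginx:
    ports:                    # published: reachable from outside
      - "${NGINX_HTTP_PORT}:80"
      - "${NGINX_HTTPS_PORT}:443"
    networks:
      - frontend
      - backend
  redis:
    # no "ports:" section – nothing is forwarded from the host,
    # but containers on "backend" can still reach redis on 6379
    networks:
      - backend
```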

If you want to both publish a port and limit access to it using iptables, then using the DOCKER-USER chain is the recommended method. Note that I've seen several releases recently where this behavior was being patched, so keep an eye on the release notes and open issues if you go this direction. Also note that the chain runs after the packet has been rewritten for the container, and therefore sees the container's target port rather than the host's target port. To filter by host port, you'll need to use conntrack, e.g.:

iptables -I DOCKER-USER -i eth0 -s 10.0.0.0/24 -p tcp \
  -m conntrack --ctorigdstport 8080 -j ACCEPT
iptables -I DOCKER-USER -i eth0 ! -s 10.0.0.0/24 -p tcp \
  -m conntrack --ctorigdstport 8080 -j DROP
BMitch

Simply remove the ports section from your redis container; its only function is to expose the service to the world. Since you configured your two containers to be on the backend network, they will be able to communicate anyway.
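Applied to the compose file from the question, the redis service would then look like this (a sketch with only the ports mapping removed):

```yaml
  redis:
    build:
      context: ./docker/redis
    volumes:
      - ${DOCKER_STORAGE}/redis:/data
    # no "ports:" mapping – redis is no longer published on the host;
    # nginx can still reach it on 6379 via the shared backend network
    networks:
      - backend
```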

Ben-Banso