
My scenario: I'd like my Docker-based services to know what IP is calling them from the web (a requirement of my application).

I'm running on a cloud VPS with Debian Jessie and Docker 1.12.2.

I use nc in verbose mode to listen on port 8080 inside a container:

docker run --rm -p "8080:8080" centos bash -c "yum install -y nc && nc -vv -kl -p 8080"

Say the VPS has the domain example.com. From my other machine, which let's say has the IP dead:::::beef, I call

nc example.com 8080

The server says:

Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Listening on :::8080
Ncat: Listening on 0.0.0.0:8080
Ncat: Connection from 172.17.0.1.
Ncat: Connection from 172.17.0.1:49799.

172.17.0.1 is on the server's local network, and has nothing to do with my client, of course. In fact:

docker0   Link encap:Ethernet  HWaddr ....
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0

If I start the container in host networking mode

docker run --rm --net=host -p "8080:8080" centos bash -c "yum install -y nc && nc -vv -kl -p 8080"

and call my server again

nc example.com 8080

I get the expected result:

Ncat: Connection from dead:::::beef.
Ncat: Connection from dead:::::beef:55650.

My questions are:

  • Why does docker "mask" IPs when not in host networking mode? My guess is that the docker daemon process is the one opening the port; it receives the connection and relays it to the container process over its own internal virtual network interface, so nc running in the container only sees the call coming from the docker daemon's IP.
  • (How) can I have my docker service learn the outside IPs calling it without putting everything into host mode?

2 Answers


According to the documentation, the default driver (i.e. what you get when you don't specify --net=host in docker run) is the bridge network driver.

It's not that docker is 'masking' the IP addresses; the difference lies in how the bridged and host networking modes work.

In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other.

Docker creates an isolated network so that the containers attached to the same bridged network can communicate with each other; in your case that network sits behind docker0. So by default, the containers on your docker host communicate within this network.
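
For reference, you can look at this network directly. A quick check, assuming the default network name bridge and the standard docker0 interface:

    # Inspect the default bridge network; its "Gateway" field should show 172.17.0.1
    docker network inspect bridge

    # The same address is assigned to the docker0 interface on the host
    ip addr show docker0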

As you might have already figured out by now, 172.17.0.1 is indeed the default gateway on the docker0 network, but it does not act as a router that transparently forwards packets to the destination; the published port is relayed by Docker, hence you see 172.17.0.1 as the source in netcat's output.

In fact, you can verify this by running ss -tulnp on your docker host. You should see that the process listening on port 8080 is docker.

On the other hand, using the host networking driver means there is no network isolation between the container and the host. You can verify this by running ss -tulnp on your docker host again; this time you should see the container's own process bound to the socket instead of docker (in your case, nc).
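
For example, here is a rough sketch of those checks on the docker host (exact process names vary by Docker version and whether the userland proxy is enabled):

    # Bridge (default) mode: the published port is held on the host side by Docker
    ss -tulnp | grep 8080

    # Docker also installs NAT rules for published ports in the DOCKER chain
    iptables -t nat -nL DOCKER

    # Host mode: the container's process binds the host socket directly,
    # so the same command should now show nc as the owner
    ss -tulnp | grep 8080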

Lester
  • I am just realizing now that I have responded to a 2-year-old unanswered post. Even if the OP no longer needs the answer, hopefully someone will still find this useful. – Lester Apr 13 '19 at 17:59
  • That is exactly why we allow and encourage answering old questions. 1600 people came here and found no answer. Now future people will. – Michael Hampton Apr 13 '19 at 18:41
  • Thanks for the answer, no matter that it's late. I'm not sure I can tell whether this should be the accepted answer, as I've forgotten I ever had this problem. I'll see if I can figure it out and whether it solves the original issue. Thanks anyway! – Aleksandar Dimitrov Apr 15 '19 at 12:35
  • I have a similar problem, although the containers don't talk to each other directly: I have a prometheus and a grafana container on the same box, with a single nginx (not container) in front of them doing the TLS termination and routing. Grafana is using the public domain name of prometheus, however the nginx logs for prometheus show the request originating from the 172.17.0.0/24 subnet instead of the public IP of my server, which I find extremely confusing. – Sakis Vtdk Oct 30 '19 at 11:07

I had the exact same issue, and this solution from @TosoBoso works well for me with an Nginx reverse proxy in front of my web back-end. Echo's documentation also helped me better understand this problem.

Basically, you set forwarding headers (at minimum you need X-Real-IP) in your Nginx config, and then you can read them later in your program:

    location /api/ {
      proxy_pass http://127.0.0.1:7006/api/;
      proxy_set_header Host $http_host;
      # pass the original client address to the backend
      proxy_set_header X-Real-IP $remote_addr;
      # append the client to the X-Forwarded-For chain and record the original scheme
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
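
As a quick sanity check (a rough sketch assuming the location block above, with the scheme, host and nc flags adjusted to your setup), you can stand in for the backend with nc and watch which headers Nginx forwards:

    # Temporarily play the backend on port 7006 and dump whatever Nginx sends
    printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n' | nc -l -p 7006

    # From another machine, hit the public endpoint
    curl http://example.com/api/

    # The request that nc prints should include a header such as
    #   X-Real-IP: <the client's public address>
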
KuN