4

I need to find a way to have 1 IP that is used by all pods every time they need to connect to the "outside world".

FULL VERSION:

I'm trying to integrate my application with a Payments Gateway service. However, this service needs to whitelist my servers' IPs, refusing all other connections for security.

Now, I'm running a GKE cluster (3, actually) with Kubernetes 1.5.2. In this cluster I have around 30 pods, and I need 1 in particular to route its Internet-directed traffic through a static, predictable IP address.

Right now, I have to provide a list of my cluster instances' external IPs to be whitelisted, but this is a problem.

The cluster is set up to autoscale up to 5 instances, and all of these instances have ephemeral IPs.

1 - I DON'T want to be forced into turning all of these into static IPs.
2 - I also DON'T want to be forced to expose that particular pod through an external endpoint, making it available for Internet-to-cluster directed connections.

Is there any way I can say/configure:

- This pod forwards all its Internet-directed connections through X endpoint?
  Obviously, this should be something easy to configure to work with 1 pod or with all of them if I so desired.

What's the correct course of action here? How can I achieve this?

I've referenced this SO question and the Source IP docs on Kubernetes, as well as these instructions on how to set up a NAT Gateway (which, given the flexible cluster config, I don't think would work).

Zed_Blade
  • 103
  • 1
  • 6
  • 1
    Did you find any solution for this? I've hit exactly the same obstacle with the MongoAtlas whitelist. The only solutions in this case are allowing all IPs or making all IPs static, but both are frustrating. – Tokenyet Feb 02 '19 at 12:16
  • @Tokenyet essentially what you need is called an Egress IP, something that did not exist (or barely existed) at the time this question was asked. To be honest, I'm not sure whether that concept exists nowadays on Kubernetes, even though OpenShift, for instance, does have it. – Zed_Blade Feb 26 '19 at 16:04

2 Answers

2

The only way that is doable is a NAT gateway.

I assume you are using some kind of HTTP API (REST API), which runs over TCP. TCP needs to complete a handshake, so something has to remember which node sent the packets in order to route the replies back. That is why NAT is needed.

The instructions you found on how to set up a NAT Gateway should work; see the sketch below. You just need to tell your containers to use the NAT instance as their gateway.
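For reference, here is a minimal sketch of that setup using gcloud. All names, zones, tags, and the route priority are placeholders I've chosen for illustration; this follows the generic GCE NAT-gateway approach rather than anything GKE-specific:

    # Reserve a static IP and create a NAT instance that can forward traffic.
    gcloud compute addresses create nat-ip --region us-central1
    gcloud compute instances create nat-gateway \
        --zone us-central1-a \
        --can-ip-forward \
        --address nat-ip \
        --tags nat-gateway

    # Route Internet-bound traffic from instances tagged "no-ip"
    # through the NAT instance.
    gcloud compute routes create no-ip-internet-route \
        --destination-range 0.0.0.0/0 \
        --next-hop-instance nat-gateway \
        --next-hop-instance-zone us-central1-a \
        --tags no-ip \
        --priority 800

On the NAT instance itself you would then enable forwarding and masquerading, along these lines:

    # Enable kernel IP forwarding and rewrite outgoing source addresses.
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE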

Christopher Perrin
  • 4,741
  • 17
  • 32
  • Hi Christopher. Thank you for your reply. My main question regarding the instructions I found is that they say the cluster instances should have a no-ip tag (or something similar), but I don't see a way to do that when creating the instance pool. Are you saying I should expose that NAT gateway as a pod in my cluster, or that I should have a dedicated VM that is exposed to the cluster as a service? – Zed_Blade Mar 07 '17 at 09:45
  • 1
    A dedicated instance is needed that is set up to NAT traffic. The no-ip thing is about tagging: in step 6 he adds a route through the NAT instance for every instance that is tagged with no-ip. The manual is not 100% about Kubernetes; you have to add a route in your pod definition that sends the default route through the instance. – Christopher Perrin Mar 08 '17 at 11:45
  • @Christopher thank you :). As I'm a beginner with Kubernetes, would it be too much trouble if I asked you how to configure the pod routes to use that NAT server? You could add that as an answer so that the bounty can be assigned upon acceptance. Best regards – Zed_Blade Mar 08 '17 at 11:49
2

This is not possible yet on GCP. Although it is a bit hacky, my recommendation is to set up an HTTP proxy on a non-GKE instance with a static IP address. Then, when you use the payments gateway, go through the HTTP proxy so you are hitting it from the correct IP address.
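As a rough sketch of what that looks like on the pod side: many HTTP client libraries honor the standard HTTP_PROXY/HTTPS_PROXY environment variables, so you can set them in the pod spec. The pod name, image, and the proxy hostname and port here are placeholders for your proxy VM's internal address:

    apiVersion: v1
    kind: Pod
    metadata:
      name: payments-client
    spec:
      containers:
      - name: app
        image: my-app:latest            # placeholder image
        env:
        # Placeholder address of the static-IP proxy VM
        - name: HTTP_PROXY
          value: "http://proxy.internal:3128"
        - name: HTTPS_PROXY
          value: "http://proxy.internal:3128"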

Make sure the IP address is NOT ephemeral.

Make an image of your proxy VM. If it goes down, you can bring up a node in any zone in the same region to act as the proxy. You can also move the IP address between instances (see the sketch below), although that will kill current connections; in any case, your code should always retry on failure.
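Moving the address is a couple of gcloud calls. A sketch, with placeholder instance names, zones, and IP (the reserved static address must be in the same region as both instances):

    # Detach the static IP from the dead proxy and attach it to the standby.
    gcloud compute instances delete-access-config proxy-a \
        --zone us-central1-a --access-config-name "external-nat"
    gcloud compute instances add-access-config proxy-b \
        --zone us-central1-b --address 203.0.113.10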

Stephen
  • 345
  • 1
  • 7