I am building a service that requires clients to connect to it via a TCP port. It will be accessible over the Internet at some known port (say 9999). So, the clients would need to open a TCP connection to "myhost.com:9999".

Specifically, the service is targeted at web servers, including people running their apps on things like Heroku. My question is: How common is it for servers/hosts/providers to block outbound TCP connections?

I've sometimes seen this on AWS, but AWS tends to be super-restrictive with its VPC setups and so forth. I haven't really seen it done commonly anywhere else, though my experience here is pretty limited. Does Heroku block outbound TCP connections? What about Azure?

In short, if my service requires people to connect to my server via TCP at a specific port, how much of the world am I cutting out of my potential user pool?

Note: Before it's brought up, I'm planning on securing the TCP connection with SSL/TLS. I'm still a bit foggy on the details, but that part of the security puzzle is planned.

More Detail

I have a central server (S) and end users install a middleware layer that is the client (MW). MW will open a connection to S and periodically send/receive on it.

Clients don't need to implement or understand the protocol; they just install MW (a Rubygem in Ruby, an npm package in Node, etc.) and provide a few config options. MW handles understanding the protocol and communicating.
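
Roughly, the shape I have in mind for MW is a long-lived, TLS-wrapped TCP connection. A minimal Ruby sketch of that idea follows; the hostname, port, and newline-delimited JSON framing are placeholders for illustration, not a settled protocol:

    require 'socket'
    require 'openssl'
    require 'json'

    # Placeholder endpoint; stands in for S at "myhost.com:9999".
    HOST = 'myhost.com'
    PORT = 9999

    tcp = TCPSocket.new(HOST, PORT)

    # Wrap the raw TCP socket in TLS (the SSL/TLS layer mentioned above).
    ctx = OpenSSL::SSL::SSLContext.new
    ctx.set_params(verify_mode: OpenSSL::SSL::VERIFY_PEER)

    ssl = OpenSSL::SSL::SSLSocket.new(tcp, ctx)
    ssl.hostname   = HOST   # SNI
    ssl.sync_close = true   # closing the TLS socket also closes the TCP socket
    ssl.connect

    # Keep one open, mostly idle connection and use it periodically.
    loop do
      ssl.puts({ type: 'heartbeat', at: Time.now.to_i }.to_json)
      reply = ssl.gets
      break if reply.nil?   # S closed the connection
      puts "S pushed: #{reply.strip}"
      sleep 30
    end

    ssl.close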

Right now it's all handled with REST polling. It works, but seems kind of messy and unnecessarily verbose. S is written in Elixir, meaning it can theoretically handle a high number of open, idle connections. So, it seems like a good idea to use something other than REST polling.
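
For comparison, the current polling approach boils down to a loop like this (the endpoint and payload are made up for illustration):

    require 'net/http'
    require 'json'
    require 'uri'

    # Placeholder polling endpoint on S.
    POLL_URI = URI('https://myhost.com/api/poll')

    loop do
      res = Net::HTTP.post(POLL_URI,
                           { client_id: 'mw-123' }.to_json,
                           'Content-Type' => 'application/json')
      JSON.parse(res.body).each { |msg| puts "S said: #{msg}" }
      sleep 10   # every cycle is a fresh HTTP request, even when nothing changed
    end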

Another choice here would be websockets, where MW connects to S via a websocket. Maybe that's the best choice practically, but it seems a bit strange to me that we're in a world where everything happens over port 80/443. Plus, I'm not sure how common it is to use websockets for server-to-server communication. They seem more oriented toward serving content to connected JavaScript clients.
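
If I went the websocket route, MW's side might look roughly like this, e.g. with the faye-websocket gem running inside an EventMachine loop (the URL and message shapes are placeholders):

    require 'faye/websocket'
    require 'eventmachine'
    require 'json'

    EM.run do
      # wss:// gives the same TLS protection as the raw-socket approach,
      # but rides over port 443.
      ws = Faye::WebSocket::Client.new('wss://myhost.com/mw')

      ws.on :open do |_event|
        ws.send({ type: 'hello', client_id: 'mw-123' }.to_json)
      end

      ws.on :message do |event|
        puts "S pushed: #{event.data}"
      end

      ws.on :close do |event|
        puts "closed: #{event.code} #{event.reason}"
        EM.stop
      end
    end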

Ultimately, my current REST polling solution works and will scale to a very high degree, way higher than I'll ever actually reach. I'm just curious about what it would take to "do it right".

Micah
  • You can at the very least expect that all company firewalls will block most ports other than 443 and 80 outgoing. – Jenny D Sep 14 '16 at 16:09
  • Home users should generally have no issues connecting; however, most security-aware companies block outbound access for all but the most common ports: TCP 80/443, 53, maybe 22, etc. – Mark Riddell Sep 14 '16 at 17:54
  • this is pretty much why REST and SOAP exist, so that everything goes thru 80/443 – Neil McGuigan Sep 15 '16 at 18:44
  • Outbound 80/443 are more likely to be permitted. You can put a load balancer in front and redirect to your 9999 easily. – dmourati Sep 15 '16 at 19:15
  • I never thought about actually just using 80/443 for my protocol, like you mention. It looks like AWS ELBs do the port redirection and TCP/SSL termination that I want. That could be perfect! Thanks! – Micah Sep 15 '16 at 19:22
  • If your product has business value then connecting to your service should not be a problem. It would, though, be better to make it easy for people to connect. – user9517 Sep 15 '16 at 22:44
  • I would strongly advise against using 80 or 443 to send traffic unrecognizable to HTTP systems, for two reasons. One, malware does this. Two, your middleware may legitimately get deployed behind a forward proxy that won't know what to do. Try very hard to find a "standard" protocol that works (again, smells like STOMP might fit the bill). Simpler for you and more attractive to customers. – Jonah Benton Sep 16 '16 at 12:59

1 Answer


This question may get flagged as being too opinion-based, but until it does:

My opinion, without knowing more details, is that a custom port is a major obstacle. Not necessarily because it creates security hurdles, though it does that as well, but because what is essentially being asked of customers is to code against a custom, bespoke, unknown application protocol.

Even if client libraries are shipped in one or another language, there will be folks that this approach leaves out. Heroku customers are precisely those who are not going to be implementing custom protocols.

Deploying a service on a bespoke port, using a bespoke protocol, aimed at the commodity Heroku crowd: absent other context, this strikes me as a non-starter.

The question is: why can't it be deployed as a service that web clients can consume, over either HTTP or websockets? What's missing?

Jonah Benton
  • Thanks for the comments and questions. I updated the original question to give some more detail on what I'm trying to do. – Micah Sep 15 '16 at 19:05
  • Are you aware of STOMP? https://stomp.github.io. It's very commonly deployed for what sounds like this use case. – Jonah Benton Sep 15 '16 at 19:34
  • STOMP is new to me. I was looking at Protocol Buffers for the over-the-wire message encoding, but I'm open to anything. – Micah Sep 15 '16 at 19:43
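
A minimal sketch of what the STOMP route mentioned in the comments could look like from MW's side, using the Ruby stomp gem; the broker address, credentials, and destinations here are placeholders:

    require 'stomp'

    # Placeholder broker details; in practice S would sit behind (or be) the broker.
    client = Stomp::Client.new('mw-user', 'secret', 'broker.myhost.com', 61613)

    # Receive pushes from S without polling.
    client.subscribe('/queue/mw-123.inbox') do |msg|
      puts "S pushed: #{msg.body}"
    end

    # Send periodic updates the other way.
    client.publish('/queue/reports', '{"type":"heartbeat"}')

    client.join   # block on the receiver thread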