
We deploy our internet-facing applications in multiple VLANs, and there is a rule that communication from one VLAN to the next has to use a different protocol or a different implementation of the same protocol.

E.g.

[Internet] --https-> [apache@VLAN1] --ajp--> [tomcat@VLAN2] --jdbc--> [pgsql@VLAN3]

Vs.

[Internet] --https-> [apache@VLAN1] --https--> [apache@VLAN2] --https--> [apache@VLAN3]

Vs.

[Internet] --https-> [apache@VLAN1] --https--> [tomcat@VLAN2] --https--> [nginx@VLAN3]

The reasoning behind this is that if there is an exploit in one of the protocol implementations, the same exploit cannot be used to break into the host in the next VLAN.

This is sometimes hard to achieve if all the services provide REST APIs.

Is there any literature where I could read about this, or about approaches that achieve the same protection?

n3utrino
  • On the other hand, by using multiple protocols, the communication is no stronger than the weakest one. – Neil Smithline Apr 17 '18 at 16:10
  • It's not about protocol safety, it's about breaking into the host that handles the protocol. I made some edits to clarify. – n3utrino Apr 17 '18 at 20:43

2 Answers


Switching protocols as you describe essentially requires analyzing the syntax and semantics of the transferred data in order to translate it into a new protocol. It also means that you need clearly defined semantics in the first place. Having clearly defined semantics can by itself already improve security. And the fact that these semantics get implicitly enforced when translating the data into a new protocol further reduces an attacker's ability to exploit the recipient.

Defining and enforcing the semantics could also be done without translating into a new protocol. But translating into a different protocol requires a stricter definition of syntax and semantics, and it is harder to take shortcuts that skip some checks. In that respect, requiring a protocol switch is a neat way to make sure that developers actually know what the data should look like in the first place and that they also enforce it.

Of course, this only works if a real translation is necessary. In your example this would be the case between HTTP and JDBC, but not between HTTPS and HTTP: HTTPS is just HTTP over TLS, so it would be enough to strip the TLS layer without enforcing any protocol semantics.
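
To make this concrete, here is a minimal sketch of such a translating hop in Python (the field names, port and database file are made up for illustration, and sqlite3 stands in for whatever driver talks to the next VLAN). The gateway only understands one narrowly defined JSON request, validates it against an explicit schema, and re-expresses it as a parameterized SQL query, so nothing from the wire format crosses the boundary verbatim:

    # Sketch of a translating hop: HTTP/JSON in, parameterized SQL out.
    # Field names, port and database file are made up for illustration;
    # sqlite3 stands in for whatever driver talks to the next VLAN.
    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The only semantics this hop understands: an "order status" lookup.
    SCHEMA = {"customer_id": int, "order_id": int}

    def validate(payload):
        """Reject anything that does not match the declared fields and types."""
        if not isinstance(payload, dict) or set(payload) != set(SCHEMA):
            raise ValueError("unexpected fields")
        return {name: typ(payload[name]) for name, typ in SCHEMA.items()}

    class Gateway(BaseHTTPRequestHandler):
        def do_POST(self):
            raw = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            try:
                req = validate(json.loads(raw))
            except (ValueError, TypeError):
                self.send_error(400, "request does not match schema")
                return
            # The translation step: the request is rebuilt as a parameterized
            # query, so nothing from the wire format reaches the database verbatim.
            con = sqlite3.connect("orders.db")
            row = con.execute(
                "SELECT status FROM orders WHERE customer_id = ? AND order_id = ?",
                (req["customer_id"], req["order_id"]),
            ).fetchone()
            con.close()
            body = json.dumps({"status": row[0] if row else None}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Gateway).serve_forever()

The particular stack does not matter; the point is that this gateway has no mode in which it can pass a request through untouched, so the validation cannot be skipped.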

Steffen Ullrich
  • It's not about protocol safety, it's about breaking into the host that handles the protocol, so semantics and syntax could be irrelevant. I made some edits to clarify. You are right about the semantics and syntax check, though; that solves a different problem. – n3utrino Apr 17 '18 at 20:47
  • 1
    @n3utrino: Exploits are essentially triggered through unexpected data which are handled the wrong way because they were not expected. Clearly defining what is expected in the first place and then enforcing these expectations thus reduces the chance of exploits. – Steffen Ullrich Apr 18 '18 at 03:25
  • You are absolutely right. But in my second example the exploit would happen before I had a chance to validate. All the Apaches would get breached with the same exploit, one after the other: if the attacker gets root on the first, he can then use the exact same exploit to breach the next, rendering the segmentation useless. – n3utrino Apr 18 '18 at 04:26
  • @n3utrino: Translating protocols is not a protection against all bugs. But it is less likely that the attacker can execute code by exploiting a bug in a widely used HTTP protocol stack. It is far more likely that a bug hides in the implementation of your specific REST service, i.e. not in the syntax of HTTP (which Apache cares about) but in the semantics of how you use HTTP (i.e. your REST API). – Steffen Ullrich Apr 18 '18 at 04:40
  • I totally understand. But assuming all application-level security measures are implemented (inspection, validation, sanitizing, serializing, etc.), is doing the protocol switches worth implementing in your opinion? Would switching the implementation (Apache, nginx, Tomcat) of the same protocol offer equal protection? – n3utrino Apr 18 '18 at 06:02
  • 1
    @n3utrino: like I said in my answer - this can be accomplished without translating to a new protocol. But, switching protocols makes it much harder to cheat in validation, i.e. deliberately or inadvertently take shortcuts in validation and thus not properly validating semantics. – Steffen Ullrich Apr 18 '18 at 06:22

This is meant to be a comment (I can't comment quite yet), but based on the example above, I would say there doesn't seem to be a whole lot of benefit in going from an encrypted protocol (HTTPS) to a plain-text one (HTTP) within your network and on to a (possibly) unencrypted one (JDBC). That it is encrypted to start with is good! And it is not always possible to force protocols; you may be stuck with JDBC.

But if they crack any part of that chain, they get the data, even if each segment is encrypted differently: say VLAN1 is actually a load balancer doing SSL offload, the connection to VLAN2 is still HTTPS but with different keys and a separate SSL session, and the app tier talks to the database over secure JDBC (from the app tier there is probably not a whole lot of choice of protocols anyway).

Encrypted end to end is always desirable, but I wonder if that is what they meant instead:

[Internet] --https offload-> [VLAN1] --https new session--> [VLAN2] --secure jdbc--> [VLAN3]

That would stop MITM.
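
As an aside, the "secure jdbc" hop in that picture usually just means turning on TLS with certificate verification on the database connection. A rough sketch of the idea in Python (psycopg2 as a stand-in for a JDBC driver; host name, credentials and CA path are placeholders):

    # Encrypting the app tier -> database hop with server certificate checking.
    # psycopg2 stands in for a JDBC driver; connection details are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="pgsql.vlan3.example",
        dbname="appdb",
        user="app",
        password="change-me",
        sslmode="verify-full",      # require TLS and verify the server's host name
        sslrootcert="/etc/ssl/certs/internal-ca.pem",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
    conn.close()

With verify-full the client checks the server certificate against the internal CA, which is what actually stops a man in the middle on that segment.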

  • Getting the data is not the problem we try to avoid with protocol switching, because if you're on the first machine you own all the sessions. The problem we try to avoid is getting into our inner network, where a lot of services would be available. – n3utrino Apr 17 '18 at 15:01
  • Aside: finally, I can add a comment. Not sure how that helps "secure" the access, especially if you are using non-proprietary protocols: one server's HTTP is a lot like any other server's HTTP. We use a load balancer at work, and if we really wanted to we could require certificate authentication between it and the server LAN side. Is that the sort of result you are after here? There are smarter people than me here, but I am not seeing how your company's policy helps at all. – IGotAHeadache Apr 17 '18 at 15:09
  • I think I worded my question poorly. It is not about the protocols but the implementations. If we used the same version of Apache for all hops, then an exploit in Apache's request handling would compromise the whole chain. – n3utrino Apr 17 '18 at 20:54
  • @n3utrino I wouldn't depend on that at the infrastructure level. Once any part is owned, there is not much you can do to protect the rest of that circuit. Harden your infrastructure, protocols, and secured "versions" across the board. It is easier to secure the same thing everywhere than everything different: the attackers are better than I am, so rather than trying to patch and secure 50 things properly, get really good at securing one or two or just a few (a really secured OS, Apache, and protocols). Nothing saves you from a badly implemented application, btw. I would argue that switching infrastructure is theatre before anything else. – IGotAHeadache Apr 18 '18 at 17:31