46

Many security measures are intended to protect against hostile users who want to abuse the software or get access to content they don't have permission to access. Things like CSRF protection, SQLi protection, TLS and many other security features mainly protect against malicious users. But what if all the users can be trusted?

Suppose you have a fully internal web application that will only ever run on the intranet of the company and will never be accessible from the outside. Assume that all the internal users can be trusted, that there are no outside users, and that the data inside the application is of little use to attackers. This means the threat model is very limited and there is not much sensitive data.

Considering these details, it seems like some of the measures, like TLS and XSS protection, wouldn't be as important. After all, there is very little risk of attackers intercepting traffic, and the users can be trusted to not enter XSS payloads. In this case, would it still make sense to implement security measures against traffic interception or malicious users?

Nzall
  • 7,313
  • 6
  • 29
  • 45
  • 3
    It's possible to do an XSS attack even against internal sites that the attacker doesn't actually have access to. Yes, it may be more difficult to craft an attack without being able to test it, but it may not actually be that difficult for an off-the-shelf or open source application. – Lie Ryan Mar 14 '17 at 11:48
  • 17
    So you trust all your internal users to never get hacked and never catch malware? – CodesInChaos Mar 14 '17 at 16:27
  • 22
    "all internal users can be trusted" - no they can't. users are infinitely stupid, and can fall for social engineering attacks. see e.g. attacks where users are tricked into pasting stuff into JavaScript consoles. – strugee Mar 14 '17 at 23:21
  • 29
    **You can't trust all the users.** – Michael Hampton Mar 14 '17 at 23:44
  • 7
    I don't need to be able to steal your car if all I really wanted to do was set fire to it. – Nohbdy Mar 15 '17 at 04:03
  • @SeanBoddy I think (e-)mailing somebody a matchbox with "bug fix" on it would be sufficient. – wizzwizz4 Mar 15 '17 at 07:17
  • "the data inside the application is of not much use to attackers" Everyone else is already calling out the problems with your other assumptions, but this one is rather large as well. If you have *any* credentials in the system, you can pretty much categorically assume this is false. (They can impersonate a user by capturing them.) You might also be underestimating the value of information to an attacker. I would assume this is *not* the case unless we're talking about a fully static HTML site, but your question says "web application," implying it's not. – jpmc26 Mar 15 '17 at 07:19
  • Generally security has to be cost efficient and within the budget. Most of the time the company is not big enough to care about this or willing to waste the money. After all, there are logs, and if access to computers is bound to user accounts and everyone logs out when inactive, then no one can commit a crime without leaving evidence, which takes away the appeal of committing it in the first place. – HopefullyHelpful Mar 15 '17 at 16:02
  • 2
    @MichaelHampton **Addendum: You can't trust *any* user. Not even yourself.** – xDaizu Mar 15 '17 at 16:52
  • One of the specific types of attacks you mention, CSRF, is trivial for a malicious internet site to perform on your LAN/intranet sites. – R.. GitHub STOP HELPING ICE Mar 15 '17 at 23:08
  • @jpmc26 Credentials are managed through whatever system the client uses for network credential management. All other data is related to build automation and continuous delivery, but even then, it's purely metadata needed to know what, where and how to build or deploy it. There is very little value in knowing this metadata, because even if you know it, there's still not much you can do with it. – Nzall Mar 16 '17 at 13:29

5 Answers

65

Yes. Absolutely, yes.

Your assumptions about your internal network have issues, several of which the comments above already point out.

More generally, there is also the question of why you would maintain two sets of practices/standards, when it is surely more efficient to have a single standard that applies everywhere.

You might find it useful to read Google's paper on BeyondCorp, https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44860.pdf.

The tl;dr is that in their conception of the network, you make assertions about users and devices, but not about the network, mostly because it is simpler to assume all networks are hostile than to assume some are and some are not (in part because the cost of misclassifying a network as safe could be very, very high).

One possible reason for such an approach is that the Snowden leaks revealed that previous assumptions about the safety of their network were incorrect: the NSA had tapped their fiber in order to intercept (at the time unencrypted) inter-DC data flows.

I think the basic answer to your question is that the boundary/demarcation point for security is no longer at the edge of your network, it is the devices on your network. And as such, it is both simpler, and more realistic, to focus on preventing categories of attacks/abuse, rather than to consider that one network is 'better' than another. You may not need quite such strong controls on an internal DMZ as you would on an external one, but assuming that your network is secure is a dangerous assumption to make.
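To make the BeyondCorp idea of asserting users and devices (rather than networks) a bit more concrete, here is a minimal sketch of a per-request check as Express-style middleware. The header names and the device inventory below are assumptions, and a real deployment would verify signed tokens or client certificates rather than trusting plain headers:

```typescript
// Sketch of "assert the user and the device, not the network".
// Header names and the device inventory are invented for illustration;
// a real deployment would verify signed tokens or client certificates
// instead of trusting plain headers.
import express, { NextFunction, Request, Response } from "express";

const managedDevices = new Set(["laptop-1234", "laptop-5678"]); // hypothetical inventory

function assertUserAndDevice(req: Request, res: Response, next: NextFunction) {
  const user = req.header("x-authenticated-user"); // assumed to be set by an auth proxy
  const device = req.header("x-device-id");        // assumed device attestation result

  // Note what is *not* checked: the source network of the request.
  if (!user || !device || !managedDevices.has(device)) {
    res.status(403).send("Access denied: unknown user or device");
    return;
  }
  next();
}

const app = express();
app.use(assertUserAndDevice);
app.get("/", (_req, res) => res.send("Hello, verified user on a managed device"));
app.listen(3000);
```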

iwaseatenbyagrue
  • 3,631
  • 1
  • 12
  • 24
  • 9
    And if you say "my internal network is so secure that all of these issues are already handled reliably", then you're clearly in an arena where heightened security is justified and so shouldn't be discarding requirements! – David Schwartz Mar 14 '17 at 18:15
  • 1
    The assumption that all users are trusted can also rapidly fail given sufficient financial incentive or a grievance against the company - which may be as simple as a perceived unfairness in promotion. – pwdst Mar 14 '17 at 19:30
  • 2
    For cloud systems, Google offers a service called "Identity-Aware Proxy" which might be of interest here (basically a BeyondCorp implementation for the masses). Disclosure: I work for Google and will be SRE'ing this service in the near future. – Kevin Mar 14 '17 at 22:30
20

The attack surface on the internal network and the external network is different, which means that different security measures are appropriate. That does not mean the attack surface on the internal network is smaller: on the one hand users are usually more trusted, but on the other hand there are more critical data, which are often easy to access from inside.

Even if all users can be trusted, it is still possible that their systems get infected with malware. Apart from that, many of the attacks you've mentioned, like CSRF, SQLi or XSS, can be done cross-origin, i.e. it is enough for an internal user to visit an external web site which then uses their browser as a trampoline to attack internal systems.
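To make the "trampoline" idea concrete, here is a minimal sketch of what a cross-origin request from an attacker-controlled external page could look like; the intranet hostname and endpoint are invented for illustration:

```typescript
// Hypothetical script served by an attacker-controlled *external* page.
// When an internal user visits that page, their browser sits inside the
// intranet, so it can reach hosts the attacker cannot reach directly.

// The hostname and endpoint are made up; any state-changing intranet URL
// without CSRF protection would do.
const INTRANET_ENDPOINT = "http://build-server.intranet.local/api/deploy";

async function fireCsrf(): Promise<void> {
  try {
    // mode "no-cors": the attacker cannot read the response, but the
    // request (carrying the victim's ambient cookies) is still delivered.
    await fetch(INTRANET_ENDPOINT, {
      method: "POST",
      mode: "no-cors",
      credentials: "include",
      body: "target=production&action=wipe",
    });
  } catch {
    // Failures are silent; the victim never notices anything.
  }
}

void fireCsrf();
```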

In summary: proper protection is needed for internal networks too, even if all users can be trusted. This is especially true if it is possible to access both the internal network and the internet from the same system because this allows cross-origin attacks from the internet against internal systems.

Steffen Ullrich
  • 184,332
  • 29
  • 363
  • 424
  • CSRF is explicitly an attack against an unsuspecting user by a malicious external page (see the "CROSS SITE" part): an external page is visited by the user, includes JS which sends a malicious request to your internal application from the user's browser, and this request contains a "; DROP DB;"... – Falco Mar 15 '17 at 12:49
1

I would say no, mainly due to this quote from the original post:

there are no outside users and the data inside the application is of not much use to attackers

The main consideration is that, even in an internal network, hostile actors can still compromise systems that can then be used to gain access to your web application.

I think your threat model is still an important consideration here. Despite the concerns that the application can be broken into, if all you're protecting is Joe's holiday schedule and Sally's party invitations, it may not be worth implementing HSTS, HPKP, XSS filtering, etc.

Most malware that infects local machines is unlikely to be designed to run a network scan and find intranet webservers. The malware that is will probably be looking for known packages (although some will just look for common names and blast every form it can find with known exploits).

This is similar to security through obscurity, and is definitely bad practice. However, practical concerns will outweigh ideals in many scenarios. I would still recommend at least a self-signed certificate and HTTPS/TLS though.
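For that last point, here is a minimal sketch of serving an internal app over TLS with a self-signed certificate, using Node's built-in https module; the certificate and key file names are assumptions, and the openssl command in the comment is one common way to generate them:

```typescript
// Minimal HTTPS server using Node's built-in modules.
// The key/cert files are assumptions; one common way to generate them:
//   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
//     -keyout intranet.key -out intranet.crt
import * as fs from "fs";
import * as https from "https";

const options: https.ServerOptions = {
  key: fs.readFileSync("intranet.key"),   // hypothetical file name
  cert: fs.readFileSync("intranet.crt"),  // hypothetical file name
};

https
  .createServer(options, (req, res) => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Internal app, but the traffic is at least encrypted.\n");
  })
  .listen(8443, () => {
    console.log("Listening on https://localhost:8443");
  });
```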

A system like this cannot survive a targeted attack, but since you've eliminated the most common attack surface (public internet access), the bulk of automated misuse will not find your site.

Waddles
  • 169
  • 3
1

A few additions to the excellent answers:

  • a lot of the stuff you should do to protect against XSS (properly encode data when you display it, mostly) is also needed to prevent a variety of bugs that can be triggered by perfectly innocent input (you don't want a text field to be broken just because it contains a < or & in the wrong place). The same applies to SQL injections (you don't want a query to break just because there's a quote in a field). So you need to do that stuff anyway, even if it's not for security (a short sketch of both follows this list).

  • there is a strong tendency for browsers to become more and more restrictive about non-TLS sites, to the point that they might become quite unusable in the near future (or at the very least display so many warnings that it will frighten your users).

  • also, even if you are only targeting internal users on an internal network and they could be fully trusted (see other answers for reasons why they shouldn't), things may change in the future. You may need to open up (parts of) the site to external users (partners, suppliers, customers...). It's much easier to take the right measures when you are doing the initial development than to retrofit security at a later time.
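As a concrete illustration of the first bullet, here is a minimal sketch of output encoding and a parameterized query; it assumes the node-postgres ("pg") client, and the table and column names are made up:

```typescript
// Sketch of the two habits from the first bullet, useful even with no
// attacker in sight. The Pool usage assumes the "pg" (node-postgres)
// client; the table and column names are invented for illustration.
import { Pool } from "pg";

// Encode data before placing it in HTML, so a value like "Smith & Sons"
// or "x < y" renders correctly instead of breaking the markup.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Use a parameterized query so a value like "O'Brien" cannot break
// (or alter) the SQL statement.
async function findBuildsByOwner(pool: Pool, owner: string) {
  const result = await pool.query(
    "SELECT id, name FROM builds WHERE owner = $1",
    [owner],
  );
  return result.rows;
}
```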

jcaron
  • 3,365
  • 2
  • 15
  • 22
0

Make a Threat Model. Depending on the data stored and the threats you identify, you may find that you need different security standards or that, all things considered, there's not actually any crucial difference.

Answering the question in general terms can go this way or that, depending on the assumptions taken, as you can already see in the answers given. But in the end, LAN or public Internet is only one variable in the set and you can't solve x = y + z with only one of the variables given.

Tom
  • 10,124
  • 18
  • 51