44

I'm developing an application that runs on our intranet and is used only by internal employees. No external parties are involved, and the application has no communication with the outside world.

Does it still need secure software design in this case? If so, is it enough to follow the OWASP guidelines?

  • This is very similar to another question about being lax on security when developing internal-facing applications: https://security.stackexchange.com/q/173901/63556 – ymbirtt Jan 30 '20 at 08:49
  • How many employees does your company have? The situation in a company with 3 people is different from the situation in a company with 3000 people. – 12431234123412341234123 Jan 30 '20 at 15:51
  • "Should I allow and enforce running with knives since it's only going to be one person?" – MonkeyZeus Jan 30 '20 at 18:25
  • XSRF is an attack that will work against an internal-only web site. – Jacob Krall Jan 31 '20 at 02:48
  • Not "used only by". Rather, "**intended to be** used only by". That's a huge difference. In questions of security you're always dealing with unintended users of a system. – R.. GitHub STOP HELPING ICE Jan 31 '20 at 14:08
  • Well, I suppose you don't have to worry as much about DOS attacks. "Someone in Belarus is trying to DOS our server" vs "Tim in accounting is trying to DOS our intranet homepage." But, yeah, other than that... – Kevin Jan 31 '20 at 19:40

8 Answers

86

While Kyle Fennell's answer is very good, I would like to offer a reason as to why it is recommended for internal applications to be designed securely.

A large number of attacks involve internal actors

There are many different versions of this factoid. "50% of all successful attacks begin internally", "Two thirds of all data breaches involve internal actors", etc.

One statistic I could find was Verizon's 2019 DBIR, in which they claim:

34% [of the analyzed data breaches] involved internal actors

Whatever the exact number may be, a significant share of attacks involve internal actors. Therefore, basing your threat model on "it's internal, therefore it's safe" is a bad idea.

Secure Software Development does not just prevent abuse, but also misuse

  • Abuse: The user does something malicious for their own gain
  • Misuse: The user does something damaging because they don't know any better

The reason why I bring up misuse is because not everything that damages the company is done intentionally. Sometimes people make mistakes, and if people make mistakes, it's good if machines prevent those mistakes from having widespread consequences.

Imagine an application where all users are allowed to do everything (because setting up permissions takes a long time, wasn't thought of during development, etc.). One user makes a mistake and deletes everything. This brings the entire department to a grinding halt, while IT has a heart attack and sprints to the server room with last week's backup.

Now imagine the same application, but with a well-defined permission system. The user accidentally attempts to delete everything, but only deletes their own assigned tasks. Their own work comes to a halt, and IT merges the data from last week's backup with the current data. Two employees could not do any productive work today, instead of 30. That's a win for you.
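
To make that concrete, here is a rough Python sketch of the kind of coarse ownership check that makes the difference; the Role/User/Task names are purely illustrative and not taken from any particular framework:

    from dataclasses import dataclass
    from enum import Enum, auto

    class Role(Enum):
        USER = auto()
        ADMIN = auto()

    @dataclass
    class User:
        name: str
        role: Role

    @dataclass
    class Task:
        id: int
        owner: str

    def delete_tasks(task_ids, current_user, task_store):
        """Delete tasks, silently skipping any the caller is not allowed to touch."""
        for task_id in list(task_ids):
            task = task_store.get(task_id)
            if task is None:
                continue
            # Ordinary users may only delete their own tasks; admins may delete any.
            if current_user.role is not Role.ADMIN and task.owner != current_user.name:
                continue
            del task_store[task_id]

With a check like this in place, the accidental "delete everything" only reaches the caller's own tasks.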

"Internal" does not mean free from malicious actors

Some companies are technically a single company with multiple teams, but fractured in such a way that the teams compete with each other rather than working together. You may think this does not happen, but Microsoft was like this for a long time.

Imagine writing an application to be used internally by all teams. Can you imagine what would happen once an employee figures out that he can lock other employees out for 30 minutes by running a script he wrote? Employees from "that other team" would constantly be locked out of the application, and the help desk would be busy for the fifth time this week trying to figure out why people keep getting locked out.

You may think this is far-fetched, but you would be surprised how far some people would go to get that sweet sweet bonus at the end of the year for performing better than "the other team".

"Internal" does not stay "Internal"

Now, in 2020, your application will only be used by a small group of people. In 2029, the application will be used by some people internally, and some vendors, and some contractors as well. What if one of your vendors discovered a flaw in your application? What if they could see that one of their competitors gets much better conditions?

This is a situation you do not want to be in, and a situation that you could have prevented.

Re-Using Code from your "internal" application

You write an internal application that does some database access stuff. It works fine for years, and nobody ever complained. Now you have to write an application that accesses the same data, but externally. "Easy!", thinks the novice coder. "I'll just re-use the code that already exists."

And now you're stuck with an external application that is vulnerable to SQL injection, because all of a sudden the code that was created "for internal use only", no pun intended, is used externally. Avoid this by making the internal code secure in the first place.
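
To illustrate the difference, here is a small self-contained sketch using Python's built-in sqlite3 module; the table and the malicious input are made up for the example:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "nobody' OR '1'='1"  # attacker-controlled value

    # Vulnerable: string concatenation lets the input rewrite the query,
    # so this returns every row in the table.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = '" + user_input + "'"
    ).fetchall()

    # Safer: a parameterized query treats the input purely as data,
    # so this returns nothing, because no user is literally named that.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_input,)
    ).fetchall()

If the internal version uses parameterized queries from day one, re-using it externally does not suddenly open an injection hole.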

Will it be enough to follow OWASP?

The answer to this question is another question: "Enough for what?" This may sound nitpicky at first, but it illustrates the problem. What exactly do you want to protect?

Define a threat model for your application that includes who could possibly be a threat to it and in what way, then find solutions for these individual threats. The OWASP Top 10 may be enough for you, or it might not be.

  • Doesn’t have to involve malice to disrupt. On one of my jobs, it was a popular prank to hold the optical mouse up to the screen for a particular workstation model. A few seconds would fill up the mouse pulse buffer and the machine would be useless for a long time. – WGroleau Jan 29 '20 at 19:39
  • Ugh, back in school, we had this industrial engineering-simulation software that had been created decades ago and continuously ported/upgraded. Despite running on Windows 10 and being from a highly trusted source, we still had to isolate it in virtual machines to sandbox it. Because, for whatever weird reason, its ancient program logic tried to work with the file system in some buggy way that'd cause it to occasionally corrupt random files that had nothing to do with it. When they added cloud-computing features, I was scared it'd somehow find a way to accidentally install ransomware. – Nat Jan 30 '20 at 07:49
  • @WGroleau I would argue it's still malicious in a way, though in a very funny way –  Jan 30 '20 at 08:08
  • Most attacks work by first infecting an easy-to-target workstation (say: HR, Management, Warehouse) and then moving laterally. External actors can enter a network easily with a RAT attached to a convincing e-mail, so it's safe to assume there can also be external actors in the network. Poorly maintained (w.r.t. security) applications and configurations then become weapons for the attackers – Margaret Bloom Jan 31 '20 at 10:07
  • @MargaretBloom Absolutely correct. I'll edit my answer to include that, if I don't forget –  Jan 31 '20 at 10:11
  • Note that OWASP are aware of the "Enough for what?" concern. One of the requirements for OWASP ASVS is to define a threat model. – James_pic Jan 31 '20 at 10:47
  • "Avoid this by making the internal code fine in the first place." You imply this, but to be explicit you need to both make the internal code fine in the first place, and double check everything, which (hopefully) includes writing new tests. – Drew Jan 31 '20 at 19:32
25

Yes, internal applications should be secured with due diligence, and yes, OWASP can be a good guide for securing your application. Also look over Microsoft's Security Development Lifecycle (SDL); it is a security assurance process focused on software development.

Why?

  • Defense in depth. An attacker could breach the network defenses. Put more layers of protection between them and your data.
  • External threats are not the only ones. Application vulnerabilities can be exploited by internal threats as well.
Kyle Fennell
6

Others already mentioned some good points about evil employees, infiltration, defense in depth... but it's much more practical than that. I can attack your internal intranet application from a random web page.

People click links all day. Sometimes because a colleague saw something they want to share, sometimes from search results (or ads), sometimes a cute cat picture with a thousand upvotes from a site like reddit, sometimes from phishing emails.

There are a lot of ways an attacker can get you to click a link. Let's pick the cat picture: for the thousand other people who upvoted it, it was harmless. Until someone clicks it whose company uses that amazing intranet website that doesn't follow the OWASP guidelines.

Clicking links to malicious pages should be mostly harmless: regular updates keep your browser secure and don't allow the website to access the rest of your computer. That's exactly why it's so easy to make you click a link: it is "mostly harmless". But a page that runs JavaScript code inside the target company's network is still an advantage to the attacker.

The page with the cat picture could contain something like this:

1. <img src=cute_cat.jpg>
2. <iframe name=hiddenframe style='display:none'></iframe>
3. <form action='http://intranet.local/addUser.php' id=myform target='hiddenframe'>
4.     <input type=hidden name=username value=joseph>
5.     <input type=hidden name=password value=123456>
6. </form>
7. <script> document.getElementById('myform').submit() </script>

Upon opening the page, completely invisibly, this will be able to call the addUser.php page on your intranet application. If you are logged in (as you typically are while at work), the browser will happily add your login cookie (containing the session token by which the intranet recognizes that you are you). The attacker now has an account on your system. For people without the intranet application, it will just do nothing.

This is an example of a Cross-Site Request Forgery (CSRF) attack (plus a few other bad practices), which following the OWASP guidelines would prevent. A brief overview of what this code does:

  1. Show the cat picture to make the page seem harmless.
  2. Add a hidden frame (sub-page) in which the response from the intranet will load, invisibly.
  3. Open a form that will submit to the addUser page on your intranet, targeting the hidden frame.
  4. A hidden field carrying the username picked by the attacker.
  5. A hidden field carrying the password picked by the attacker.
  6. End of the form.
  7. Call submit() on the form, so the request is sent without any interaction from you.

If the addUser.php page does not have (or check) anti-CSRF tokens, this attack is 100% possible and lots of sites were vulnerable to this in the past. One example? My school's intranet where grades were registered. I could have sent a teacher a link to a digital hand-in, and the page could (aside from showing my hand-in) have changed my (or anyone else's!) grades in the background.

It's still common today. Here is another, much simpler (and less harmful) example:

<img src='cute_cat.jpg'>
<img src='http://intranet.local/logout.php'>

This just calls the logout page. The browser expects an image from that logout.php page, but if there is no image (because it's a logout page), it just discards the result. Meanwhile, the intranet application logs you out. If the attacker manages to trigger this every 2 seconds from a tab that you keep open for a while, you might not be able to use the intranet because you keep being logged out.
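
For completeness, the defense OWASP recommends for this is a per-session anti-CSRF token that the attacker's page cannot know. Here is a framework-agnostic Python sketch of the idea; the session and form objects are assumed to be dict-like values provided by whatever web framework you use:

    import hmac
    import secrets

    def issue_csrf_token(session):
        """Generate a token once per session and embed it in every form you render."""
        if "csrf_token" not in session:
            session["csrf_token"] = secrets.token_urlsafe(32)
        return session["csrf_token"]

    def verify_csrf_token(session, form):
        """Reject any state-changing request whose token is missing or wrong."""
        expected = session.get("csrf_token", "")
        submitted = form.get("csrf_token", "")
        # compare_digest avoids leaking information through timing differences.
        return bool(expected) and hmac.compare_digest(expected, submitted)

Because the cat-picture page cannot read pages from your intranet (the same-origin policy forbids it), it cannot learn the token, and the forged addUser or logout request fails the check.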

Luc
5

Remember the giant Capital One breach in August 2019?

The root cause was a server-side request forgery (SSRF) vulnerability in an internal Capital One app.

So yes, you need to worry about secure design on internal apps.

Mike Ounsworth
3

What platform? Before I retired, I had to make sure that anything I wrote could not fail to handle all exceptions. Any unhandled exception would present the user with a pop-up begging them to send data to Microsoft, data which could contain personal information that Microsoft promises not to use.

Of course, most users will promptly click OK without reading. And whether or not Microsoft honors that promise, sending the data would make the hospital liable to prosecution under HIPAA. And HIPAA requires Microsoft to report us if they detect any patient information.

macOS has a similar pop-up, and if the user doesn't turn it off in Settings in advance, iOS sends the data without asking.

And then there’s Android, coded by one of NSA’s biggest competitors.

So, the answer is “yes” for any of those platforms.

WGroleau
2

Absolutely 100% yes.

For all the reasons given and one very important practical one: You never know on which day someone in management decides to put that thing on the Internet. "It works so well, our external contractors should use it." or some other reason.

You want to completely refactor it when that happens?

Tom
1

A very common thing to happen in a company is for people to like using an internal tool, mention it to a partner or customer, and then there's clamoring for the tool to be made available to external users.

Yes, use some security precautions on the tool, and don't lock yourself out of securing it in the future. The simplest things go a long way, like "create a dedicated user instead of root for this process" and "restrict the user's and process' visibility only to things that the tool needs".
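
As a hedged example of the "dedicated user instead of root" point, here is a short POSIX-only Python sketch; the account name tooluser is made up for illustration:

    import os
    import pwd

    def drop_privileges(username="tooluser"):
        """Give up root and continue as an unprivileged dedicated account."""
        if os.getuid() != 0:
            return  # already running unprivileged, nothing to do
        pw = pwd.getpwnam(username)
        os.setgroups([])          # drop supplementary groups
        os.setgid(pw.pw_gid)      # switch group first, while we still can
        os.setuid(pw.pw_uid)      # then give up root itself
        os.umask(0o077)           # new files readable by this user only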

0

I am going to post somewhat of a blanket statement here, but if your application is professionally coded and follows best practices, it should already be fairly secure out of the box. At least the most common vulnerabilities, such as SQL injection, should not be exploitable.

And the development frameworks available nowadays actually make the job easier for you. On the other hand, if you prioritize speed of development over quality, if you are stuck with coding guidelines from the 1990s, if you don't use parameterized queries... then you're asking for trouble.

At the very least you should pentest your application to make sure the most obvious mistakes are not present in your code, and that a script kiddie cannot compromise your system by launching an automated attack.

Like Tom says, stuff that is isolated today could be exposed on the Internet tomorrow, due to a management decision, or a router/firewall misconfiguration. The application might be exposed by accident, without you being aware, or after you left the company.

And you would be surprised at how bored employees spend their free time. I once found a port scanner on the workstation of some administrative clerk who is definitely not computer-literate. The tool didn't land there by accident. Too often, employees are the weak link in any organization.

The appropriate level of paranoia depends on the kinds of assets your intranet gives access to. If the assets are sensitive and the application gets hacked one day, your job could be on the line if the forensic investigation shows that your code was sloppy and did not comply with the bare minimum of security practices. The worst-case scenario is that you get sued by your employer/client for malpractice; it surely must happen from time to time.

I am wondering what happened to the IT guys who worked at Equifax.

Consider the network topology too. If the intranet is hosted in-house and directly connected to your LAN, then it is a gateway to your LAN and other resources. If I am an attacker and I want to get into your system, I will be looking for weak spots, indirect but overlooked routes.

So I would rephrase the question like this: under which circumstances does one not need secure software design?

Think about your employer/client, but also think about your reputation. There is a good chance that one day somebody else will look at your code, for example another IT person tasked with migrating the application in the future. Somebody who is maybe more knowledgeable than you are, and will not have anything nice to say when looking at your code.

Kate
  • "but if your application is professionally coded and follows best practices" <- This is rarely true, and even if it was this: "it should be already fairly secure out of the box." does not follow. Security vulnerabilities are effectively just bugs, and even "professionals" following best practices won't stop them from happening. Especially if you have decided that security is not important. – Conor Mancone Feb 28 '20 at 15:37