29

I received a report from the security team today. It lists the following vulnerabilities and their descriptions:

1) Poor Error Handling: Overly Broad Throws

The methods in program1.java throw a generic exception, making it harder for callers to do a good job of error handling and recovery.

2) Poor Error Handling: Overly Broad Catch

The catch block at EXAMPLE1.java line 146 handles a broad swath of exceptions, potentially trapping dissimilar issues or problems that should not be dealt with at this point in the program.

3) Poor Error Handling: Empty Catch Block

The method SomeMethod() in somefile.java ignores an exception on line 33, which could cause the program to overlook unexpected states and conditions.

After reading the report, a few questions come to mind:

  1. How are these related to security? As far as I understand, these are code quality issues.

  2. If these are real security threats, how can an attacker exploit them?

  3. What kind of protection controls are needed to mitigate the above issues?

useradmin1234
  • If this is from your security team, why not ask them? They are either running a tool they do not understand or they have specific goals and requirements that these findings relate to. Either way, it sounds like a misalignment in expectations somewhere. – schroeder Jan 16 '20 at 12:13
  • @schroeder The justifications given are already mentioned in my question. – useradmin1234 Jan 16 '20 at 12:22
  • But, as you say, those aren't security concerns (else why are you asking here, right?) – schroeder Jan 16 '20 at 12:32
  • @useradmin1234 The justifications you were given are straight out of Fortify results. This sounds more like a lazy code review job to me. What's the risk rating they have given for this? (Perhaps I am shooting myself in the foot with this advice.) You should ask them to clarify the security impact of each vulnerability in terms of Confidentiality, Integrity and Availability, or similar security properties. – hax Jan 16 '20 at 15:48
  • These are not *directly* security issues. These are serious code quality issues that prevent *stupid* security issues from getting noticed in time. Stupid, as in "Yeah, we had guarded against that CSRF attack, and our code noticed that a token was missing, but the exception got caught and ignored, so the attack got through, and that's why the President is now registered as a sex offender." – Bass Jan 17 '20 at 01:49
  • Most security issues stem directly from code-quality issues. Sorry, no citation :/ – Christopher Schultz Jan 17 '20 at 03:34
  • I'd argue that in some cases faulty error handling leads to unexpected behaviour, which can then lead to security issues. – PaulHK Jan 17 '20 at 05:47
  • @Bass It depends on the specific case probably. If somefile.java line 33 is something like `i = -1; try {i = Integer.parseInt(str);} catch(NumberFormatException ex) {}` then it's probably acceptable (though you can put `i = -1;` in the catch block to slightly enhance readability). (And it shouldn't all be on one line of course :) – user253751 Jan 17 '20 at 11:47
  • The empty catch block could be exploited by taking advantage of it to allow bogus data to be fed into the system. – Hot Licks Jan 17 '20 at 23:34
  • These are results that warrant a closer look, but aren't necessarily security issues. If this is what you get back from an audit, it was a lousy audit. They should've dug deeper or not mentioned it at all. – Mast Jan 19 '20 at 08:50
  • @hax If these warnings seem to have come from an automated tool, they're the hallmark of a coder who is too slipshod or lazy to run the same tool themselves. As such, their code absolutely should be pushed back to get them to get their act together. – Graham Jan 19 '20 at 18:40
  • @Graham Typically developers may not have access to those tools. I don't disagree with the fact that it is lazy coding. But more seriously this is lazy auditing. – hax Jan 20 '20 at 06:32

6 Answers

49

I think you are correct that those issues are more related to code quality rather than security, and none of them are exploitable in any obvious way. I would not call them "vulnerabilities".

But vulnerabilities are born out of bugs, and bugs are born out of bad code quality. Bad error handling could lead to unexpected results - i.e. a bug - and if you are unlucky that could lead to vulnerabilities.

Here are some examples related to exception handling:

  • If you don't handle an exception at all, it could lead to the program crashing. This could be used for a denial of service attack - simply flood the server with malicious requests that crash it, and it will spend all its time restarting instead of serving legitimate requests.
  • An unhandled exception might lead to disclosure of sensitive information if error messages are passed on in HTTP responses (as pointed out by Rich Moss).
  • An overly broad catch statement that catches everything could lead to security-critical exceptions being glossed over (see the sketch below).
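
To make the last bullet concrete, here is a minimal hypothetical sketch (the method names are invented, not taken from the report or any real codebase) of how a broad, ignored catch can let a failed security check pass unnoticed:

try {
    checkCsrfToken(request);   // throws SecurityException when the token is missing or invalid
} catch (Exception e) {
    // meant to absorb unrelated parsing errors, but it also swallows the
    // failed CSRF check, and execution simply continues below
}
processOrder(request);         // runs even when the security check failed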

For real examples, check out this CVE list filtered on the word "exception" (as suggested by Ben Hocking).

The solution is to follow best practices for exception handling. I will not cover that in detail here, since that is more of a programming topic than a security one.

Anders
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/103464/discussion-on-answer-by-anders-poor-error-handling-source-code-review). – schroeder Jan 18 '20 at 16:27
11

How are these related to security? As far as I understand, these are code quality issues.

These are code quality issues. Judging by the names, they look like they come straight out of Fortify. Fortify can also look for code quality issues, and it appears your security team enabled those rules while running the scans.

If these are real security threats, how can an attacker exploit them?

I do not think any of the issues listed are exploitable in a meaningful way. The only impact I can foresee is that this would make it extremely difficult for you to debug your code.

What kind of protection controls are needed to mitigate the above issues?

The issues are self-explanatory: follow exception handling best practices.
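
As a hedged illustration of what that usually means in Java (the class and method names below are invented, not taken from your codebase): declare and catch the narrowest exception types you can actually handle, and let everything else propagate.

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Properties;

public class ConfigLoader {
    // Instead of: public Properties loadConfig(String path) throws Exception
    public Properties loadConfig(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);                  // may throw IOException
        } catch (FileNotFoundException e) {
            return defaultConfig();          // the one case we know how to recover from
        }
        return props;                        // other IOExceptions propagate to the caller
    }

    private Properties defaultConfig() {     // hypothetical fallback
        return new Properties();
    }
}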

hax
  • +1, although there can be subtleties. For instance if a cryptographic library throws an exception because of unexpected input (perhaps due to improperly validated input) and then the exception is ignored because of an empty catch block, then the system may inadvertently continue with a dangerous action when it should have died. On the one hand that's a bit of a stretch, but on the other hand things like that do happen in practice. I agree that these are code quality issues more than security issues, but code quality issues often lead to security issues. – Conor Mancone Jan 16 '20 at 14:00
  • Agree that it may sometimes result in security impact too. – hax Jan 16 '20 at 14:02
6

Bad guys start an attack by learning as much as possible about the target system. An improperly handled exception can reveal sensitive information to the calling client.

In a REST API, for example, a typical successful GET response would include the requested object and a 200 HTTP status code, but little else about the server implementation. A response to a GET request for a non-existent object should return a 404 HTTP status code and no response body. That's proper error handling.

In exception scenarios, a 500 (server error) status code should be returned, and the response body may contain some information about the source of the exception: a connectivity issue, a database unique-key violation, etc. This type of response can contain information about the system that a bad guy could use, including stack traces, server technologies, software libraries and vendor names. Proper exception handling can prevent this data from being revealed to the calling client, or from traversing the layers of software to be stored in unnecessary logs.

Remember that data proliferation is one of the enemies of a secure system. As more exception details are stored in redundant logs, the attack surface to find that data widens. So keep the excess noise out of your logs and server responses.

The message from your security reviewers is to handle all known exception scenarios like a normal flow. I'd guess the overly broad throw means the code throws a generic exception instead of a typed exception; throwing and catching typed exceptions also addresses the overly broad catch. In the REST API example, the caller may opt to return an empty response if the exception is a known, frequent timeout issue on the server side and instruct the user to try again or check activity. In contrast, an exception triggered by invalid input like '<script..>' might require additional levels of alerting, possibly a user lockout, and an empty response.
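
As a hedged sketch of that idea, assuming a JAX-RS stack (other frameworks differ in the details, and the class here is invented for illustration): a last-resort exception mapper can keep the full details in the server log while returning only a generic 500 body to the client.

import java.util.logging.Level;
import java.util.logging.Logger;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class GenericExceptionMapper implements ExceptionMapper<Exception> {
    private static final Logger LOG = Logger.getLogger(GenericExceptionMapper.class.getName());

    @Override
    public Response toResponse(Exception ex) {
        // stack traces, driver messages and vendor names stay in the server log
        LOG.log(Level.SEVERE, "Unhandled exception while serving request", ex);
        // the client learns only that something went wrong, nothing about the internals
        return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                .type(MediaType.APPLICATION_JSON)
                .entity("{\"error\":\"internal server error\"}")
                .build();
    }
}

A catch-all like this belongs only at the outermost boundary of the application; inside the code, the typed exceptions mentioned above should still be caught individually.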

Rich Moss
  • I disagree with a lot of this. First, a 404 on a REST API doesn't require an empty response body. In particular a JSON API should always return JSON, so an empty response body just for a 404 can cause problems. More importantly, for a 500 the server should *never* return information about the source of the error in the response body (at least in production), so that is not relevant here. – Conor Mancone Jan 16 '20 at 21:14
  • Thanks for sharing your understanding of REST API. Mine is based on training and personal experience connecting to APIs, some of which have included exception details in a 500 response body, and empty 404 response body [by design](https://stackoverflow.com/questions/12806386/standard-json-api-response-format). However, the REST API was intended as a concrete example of how exception handling could expose sensitive data, and was not intended to start a tangential discussion on REST fundamentals. I'm new here - your feedback is appreciated if you think this answer could be improved otherwise. – Rich Moss Jan 16 '20 at 23:16
  • There are definitely APIs out there that show internal details in the event of an error, but that doesn't have anything to do with REST APIs. There are "regular" websites that do that too. However, in all cases this is already a bad practice. See [this](https://security.stackexchange.com/questions/19130/is-a-stack-trace-of-a-server-application-a-vulnerability). Exception catching doesn't have anything to do with tracebacks, because whether or not error details are displayed is typically a global application setting; an uncaught exception won't change anything. – Conor Mancone Jan 16 '20 at 23:55
6

I have been dealing with code audits, security analyses like this one, and ethical hacking runs for a decade and a half now, so let me share some of my experience.

Every single boss and team lead I've had that I saw as a role model insisted on high standards for code quality, not just to make code reviews easier but also in the name of stability, maintainability and security. Devs who do exception handling like your team either have little experience, have little time, or simply don't care. And this kind of lazy coding can lead to disaster.

The most glaring one to my eyes is your problem number 3, an empty catch. That masks whatever error passes through it, not even logging it. Now suppose the following happens:

try {
    // start a transaction
} catch (Exception ex) {
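    // the exception is silently swallowed here, yet the finally block below still commits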
}
finally {
    // commit the transaction
}

You could be putting data into the database that is invalid for the business. For a more elaborate example, suppose a type Account with a property funds, such that funds is an unsigned integer (because you don't want to allow an account with less than zero money in it):

try {
    // moving funds between accounts
    Transaction.start();
    accountA.funds += amount;
    accountB.funds -= amount; // throws an overflow exception if funds < amount
} catch (Exception ex) {
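    // swallowed: the overflow never surfaces, yet the finally block still commits the half-done transfer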
}
finally {
    Transaction.commit();
}

The end result can be money going to account A, but not being subtracted from account B.
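
A minimal sketch of the safer pattern, reusing the same hypothetical Transaction/Account API from above (the rollback() call and the exception types are my assumptions, not part of the original example): commit only when every step succeeded, and handle only the failure you actually expect.

try {
    Transaction.start();
    accountA.funds += amount;
    accountB.funds -= amount;        // may still throw if funds < amount
    Transaction.commit();            // commit only after every step succeeded
} catch (ArithmeticException ex) {   // the one failure we expect and can handle
    Transaction.rollback();          // assumed API: leave both accounts unchanged
    throw new IllegalStateException("transfer failed: insufficient funds", ex);
}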

This may sound stupid and beneath your team, but I've had to deal with such things in legacy code at some companies I've worked for. I'm not saying that your solution has this kind of code in it, but I am saying that if you have lazy or disgruntled staff, don't be surprised to find code like that. And never trust your codebase to be fully secure if you are getting a report like the one you did.

Let me go one step further and give you an example in which bad error handling really soured the day for a lot of people. One of the most expensive coding errors in history occurred on the Ariane 5 rocket's first test flight.

Ariane 5's first test flight (Ariane 5 Flight 501) on 4 June 1996 failed, with the rocket self-destructing 37 seconds after launch because of a malfunction in the control software. A data conversion from 64-bit floating point value to 16-bit signed integer value to be stored in a variable representing horizontal bias caused a processor trap (operand error) because the floating point value was too large to be represented by a 16-bit signed integer. The software was originally written for the Ariane 4 where efficiency considerations (the computer running the software had an 80% maximum workload requirement) led to four variables being protected with a handler *while three others, including the horizontal bias variable, were left unprotected* because it was thought that they were "physically limited or that there was a large margin of safety".

Emphasis mine. If I remember correctly, that was a 370-million-dollar failure. Granted, you probably aren't coding for NASA or DARPA, so your losses due to bad coding should be mostly staff time spent on rework rather than exploding machinery. But losses are losses just the same, and if you can avoid them by being just a bit more careful, you'd better at least consider it.

Script Kid
2

These kinds of code quality issues may lead to security issues, depending on how the code is implemented. That's why we have SAST tools in place to detect these kinds of issues at an early stage of development.

You need to improve your error handling.

Winnnn
0

Lack of specificity means that there are potentially more paths through the program, and the more paths there are, the harder it is to verify its properties, including its security. This decreases one's confidence in its security.

In particular, a broad catch may make it possible for a user to force a jump to the catch block from more places than was intended. So, firstly, there is extra work in verifying whether this is possible, and if so, there is all the work of figuring out the consequences from then on. Furthermore, this adds a verification burden to every subsequent change. It is not a defense to say that it will rarely have consequences: you still either have to do at least some additional verification, or accept that the code will be less well verified than it could have been if a few basic principles were followed.
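
As a small hypothetical illustration (every name below is invented): the author of this catch block was thinking only of malformed input, but the broad catch also absorbs a failed permission check, creating a path through the program that nobody reviewed.

try {
    int id = Integer.parseInt(request.getParameter("id"));  // NumberFormatException is the expected failure
    checkReadPermission(user, id);                          // may throw SecurityException
    return repository.load(id);
} catch (Exception e) {
    // intended only for bad input, but an access-denied SecurityException
    // lands here too, silently turning "forbidden" into "default record"
    return Record.DEFAULT;
}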

sdenham