39

I'm writing a simple REST API, and I want to restrict access to my mobile client only. In other words, I'm trying to prevent a malicious user from, e.g., using curl to make an unauthorized POST request.

Of course, this is impossible. However, there are certain countermeasures that make it difficult for a hacker to succeed. Right now, I am encrypting all requests with a private key, stored client-side (obviously, this is not ideal, but the difficulty in reverse-engineering an iOS app will hopefully deter all but the most determined hackers).

One simple idea I had is to return the wrong HTTP response code for an unauthorized request. Rather than return a "401 Unauthorized", why not return, e.g., a "305 Use Proxy", i.e. be purposely confusing? Has anyone ever thought about doing this?

Miles
  • 501
  • 1
  • 4
  • 6
  • 74
    It's called "security by obscurity". It can slow someone down, but that's about it. – Matthew Feb 08 '17 at 18:30
  • 74
    For what it's worth, I read somewhere in an official document on HTTP (sorry, I don't remember the source) that returning 404 instead of 401 is permissible so as not to leak information about resource existence to unauthorized clients. – Out of Band Feb 08 '17 at 18:48
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/53456/discussion-on-question-by-mpl-returning-the-wrong-http-response-code-on-purpose). – Rory Alsop Feb 11 '17 at 11:58
  • 9
    If you are not returning the correct HTTP status codes, one might say that strictly speaking, you are not speaking HTTP. So once you've gone so far, you might simply implement a completely different protocol of your own ... – Hagen von Eitzen Feb 11 '17 at 12:23
  • 4
    @Pascal this is the approach taken by GitHub (at least) when attempting to access a private repository when not authenticated or unauthorized. – Jules Feb 11 '17 at 20:38
  • 1
    @Pascal You're thinking of using 404 in place of 403. [The HTTP standard](https://tools.ietf.org/html/rfc7231#section-6.5.3) explicitly allows this to avoid revealing the existence of a resource to a user that isn't allowed to access it. You can keep a 401 from leaking info about existence if you require it for *all* locations that match the pattern of other locations that require authentication. (I hope reposting this comment is okay; I think this is important info.) – jpmc26 Feb 13 '17 at 06:28

11 Answers

84

Has anyone ever thought about doing this?

Yes, there was actually a talk about exactly this at DEF CON 21 (video, slides).

Their conclusion was that using response codes offensively can sometimes result in severely slowed-down automated scanners, scanners that stop working altogether, and massive numbers of false positives or false negatives (it will obviously do little to nothing against manual scans).

While security by obscurity should never be your only defense, it can be beneficial as defense in depth (another example: it is recommended to not broadcast version numbers of all used components).

On the other hand, a REST API should be as clean as possible, and replying with purposely wrong HTTP codes may be confusing for developers and legitimate clients (this is a bit less of a problem for browsers, where users don't actually see the codes). Because of this I wouldn't recommend it in your case, but it is still an interesting idea.
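For illustration, here is a minimal sketch of the idea under discussion, written in Python with Flask as an assumption; the header name, key check, and endpoint are all hypothetical:

```python
# A sketch of deliberately replying with a misleading status code instead
# of a 401. Flask is an assumption; X-Api-Key and the key are hypothetical.
from flask import Flask, request

app = Flask(__name__)
EXPECTED_KEY = "replace-me"  # hypothetical shared secret

@app.before_request
def confuse_unauthorized_clients():
    if request.headers.get("X-Api-Key") != EXPECTED_KEY:
        # Deliberately misleading: a scanner expecting 401/403 may
        # misclassify the endpoint, but so will legitimate tooling.
        return "", 305

@app.route("/items", methods=["POST"])
def create_item():
    return {"status": "created"}, 201
```

Note that every well-behaved consumer of the API (client libraries, monitoring, debugging tools) is misled in exactly the same way as the scanner, which is the trade-off described above.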

tim
  • 29,018
  • 7
  • 95
  • 119
  • 3
    While I do acknowledge the legitimacy of 'security by obscurity' as a barrier to entry (i.e. kicking out script kiddies), I think it's so widely overused that it's severely detrimental to the actual application's security. Of the two related fields - cryptography and information security - one relies on 'obscurity', while the other does not. Incidentally, the one that **does** gets pwned easier most of the time. Furthermore, obscurity applies to all parties (including pentesters that you pay) and introduces further complexity, ergo attack vectors. – K.Steff Feb 10 '17 at 15:31
  • 2
    @K.Steff *"cryptography relies on obscurity"*? Did you mean to write "steganography"? – ypercubeᵀᴹ Feb 11 '17 at 21:26
  • 1
    @K.Steff Every government that uses classified cryptographic algorithms uses "security by obscurity" in cryptography. – mostlyinformed Feb 12 '17 at 02:11
  • 4
    +1 for the mention of developers. When I make a change to a system and suddenly get a 404 where I previously got a 200, I have a game of whack-a-mole identifying where the decision was made to send a different response. All I can say to OP is: if you decide to do this, please please please document it properly, or your name will be mud some time in the future – Darren H Feb 12 '17 at 19:16
  • 1
    I think this is a very practical answer. It makes me angry when someone puts down security by obscurity because they read that it was bad on a blog or something. In the real world, it definitely can be useful. It just can't be the only trick in your bag, that's all. – corsiKa Feb 13 '17 at 01:01
  • @ypercubeᵀᴹ Now that I read my comment, it does sound like I'm saying that, while I meant the exact opposite - in cryptography, Kerckhoffs' principle is essentially the ideological opposite of 'Security by obscurity'. – K.Steff Feb 13 '17 at 14:50
61

It won't actually slow down an attacker any appreciable amount, but will cause any future developers who work on your platform to be really annoyed at you. It may also cause certain nice features of your HTTP request libraries to not be so nice, as they're operating off of incorrect information.

This is a very weak form of security through obscurity. When designing a system like this, you should be thinking about slowing down an attacker by hundreds of years, not tens of minutes - otherwise you're still going to lose.

Xiong Chiamiov
  • 9,384
  • 2
  • 34
  • 76
  • 15
    Harsh, but just. I like it :) . It puts the nature of the weakness into perspective. – J.A.K. Feb 08 '17 at 18:37
  • This question is about a scenario where security by obscurity is the only possibility. Your first paragraph is good, but your second one is irrelevant. – Nacht Feb 08 '17 at 22:09
  • @Nacht how is the second one irrelevant? It's very relevant. He's saying that if you're going to use security through obscurity, it better be damn good obscurity. He's saying this particular defence isn't going to slow anyone sophisticated down by very much at all – Cruncher Feb 09 '17 at 14:28
  • @Cruncher, no, security by obscurity is never going to slow anyone down hundreds of years. He's saying you need "proper" security, which is impossible with the OP's current design. – Nacht Feb 09 '17 at 22:30
  • @Nacht "security by obscurity is the only possibility" i.e. "no security is the only possibility" (hyperbole alert). If you can't secure a system and you need it secure, don't implement it. Same goes for business models, like ads. – K.Steff Feb 10 '17 at 15:23
  • 1
    "When designing a system like this, you should be thinking about slowing down an attacker by hundreds of years, not tens of minutes - otherwise you're still going to lose." Against a dedicated, determined, skilled attacker? Yes. But to defeat automated attack programs or human attackers just looking for the easiest & most vulnerable targets to take advantage of? No. They will likely move right along. And that, BTW, will in turn often provide you with better visibility to spot more intent threats as they face defeating the best "real" security measures you have the ability to deploy. – mostlyinformed Feb 12 '17 at 02:34
  • I think this REALLY depends on strong assumptions about the attacker. Certainly it won't slow down a skilled attacker specifically targeting the site, but it can thwart automated scanning attempts or slow attackers who might end up confused by the response code. So it doesn't *necessarily* add security, but it can improve things. Question is if it's worth the time to implement anything that might slow down a less determined attacker. – Kat Feb 14 '17 at 18:30
  • 1
    @mostlyinformed OP specifically mentioned a malicious user using curl or some other method to make (unauthorized) requests. The attacker has already reverse-engineered the iOS app to obtain, among other things, a key necessary to perform the attack. They're not going to be stopped by a minor inconvenience regarding return codes. – GnP Feb 14 '17 at 18:52
10

Rather than return a "401 Unauthorized", why not return, e.g., a "305 Use Proxy", i.e. be purposely confusing.

Yes, it will confuse an attacker, but a trained one for perhaps no more than two seconds flat. And status codes are not all that useful to an attacker anyway, mainly just when brute-forcing file names.

Say I have a valid key, and I can observe you returning 200-range codes for my authentication. If I change a bit in my key and you return a 305 for every request, I will immediately think "Hmm, seems like the dev might've messed up." If you return random codes, I'll know it was on purpose and just ignore them.
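To make that concrete, here is a hypothetical probe (Python with the requests library; the URL, header name, and key are placeholders) showing how quickly the pattern surfaces:

```python
# Flip one byte of a recovered key at a time and record the status codes.
import requests

URL = "https://api.example.com/items"   # placeholder endpoint
VALID_KEY = "replace-me"                # key extracted from the client app

for i in range(len(VALID_KEY)):
    mutated = VALID_KEY[:i] + "x" + VALID_KEY[i + 1:]
    resp = requests.post(URL, headers={"X-Api-Key": mutated})
    print(i, resp.status_code)  # a wall of identical 305s stands out at once
```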

the difficulty in reverse-engineering an iOS app will hopefully deter all but the most determined hackers

Yes, it will, but since it only takes one person to publish what they find, it again only slows things down.

J.A.K.
  • 4,793
  • 13
  • 30
  • 2
    A long time ago I wrote some code that launches an attack on anybody who tries this kind of thing. I ended up learning it's not the wisest of ideas. Far more sane might be bad request -> IP banned for 100 hours at the firewall level. – Joshua Feb 11 '17 at 05:26
6

This is security through obscurity, which is to say it does not provide much security at all. The solution you are suggesting will only slow down an attacker, not prevent them from using their own client. In fact, your method of encrypting the requests may, depending on your implementation, actually make your application less secure by opening up attacks on other parts of the crypto system. Your effort would be better spent securing the API functions themselves (e.g. pen-testing them and hardening them against attacks like SQL injection) rather than attempting to prevent unauthorized clients from accessing them.
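As one example of securing the functions themselves, here is a minimal sketch of a parameterized query (Python's sqlite3 used as an assumption; the table and columns are hypothetical):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # The ? placeholder keeps attacker-controlled input out of the SQL text.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```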

Dan Landberg
  • 3,312
  • 12
  • 17
  • I'm confused, though. Everyone is saying that OP needs real security, and NOBODY has mentioned how to do that yet. The problem is they don't want the endpoint to be called unless an ad was watched. How do they accomplish this? – Cruncher Feb 10 '17 at 11:31
  • That is an entirely different question from the original one, and also still difficult. You would need some server-side mechanism to determine if the user has really and truly watched the ad. Maybe you send a digitally signed token which includes the timestamp when the request for the ad was received and the length of the advertisement. Then that token would have to be resubmitted and validated when the user wants to request the protected endpoint. I would research DRM solutions. Rolling this one on your own without cryptography experience would be difficult to do securely. – Dan Landberg Feb 10 '17 at 22:49
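A rough sketch of the signed-token idea from the comment above (Python; every name and field here is hypothetical, and this is nowhere near a complete DRM scheme):

```python
import hashlib
import hmac
import json
import time

SERVER_SECRET = b"replace-me"  # kept server-side, never shipped to the client

def issue_ad_token(ad_length_seconds: int) -> str:
    # Issued when the client requests the ad.
    payload = json.dumps({"issued": time.time(), "ad_len": ad_length_seconds})
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def validate_ad_token(token: str) -> bool:
    # Checked when the client calls the protected endpoint.
    payload, _, sig = token.rpartition(".")  # the hex signature contains no dots
    expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    data = json.loads(payload)
    # Reject tokens redeemed before the ad could possibly have finished.
    return time.time() - data["issued"] >= data["ad_len"]
```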
4

But the user IS already using your protocol.

Your problem is that your server’s interface of what the user can do is not secure!
You decide what data your server sends out to whom!
(Hello, dear online newspapers. Yes, I’m looking at you!)

Design it, assuming that the user is the client. Not your code. Because he is. It should not matter what client is used.
Your app runs on a CPU that is under the hardware control of the user. Your code is just a list of commands/data, and the user and his CPU can process it however they please. Including not processing it.
He decides what his CPU does. Don’t mistake his grace of accepting your app code as-is for a right to blind execution. You’re the one who’s trusted here, and that trust is very fleeting.
Especially with sleazy tactics like this.

In any case: You hand the user the encryption key and everything, and expect him to not use it himself, because you put it somewhere in your basket of code. … Just like DRM, that’s snake oil and can never work.
It takes only one person to find where you put the key. (That would be me, for example.) Everyone else just has to google for it.

But I’m surprised that you only think about encrypting the protocol against the user, instead of for his protection from man-in-the-middle attacks.
Assuming the reason this is usually done (Yes, I’m talking to you “content industry” again.): If your user is your enemy, maybe you should look for a business model that is based on fairness and a win-win, instead of ripping the user off and having to deal with backlash.

P.S.: Ignore all the “security through obscurity” answers. This is a fallacy that results in correct behavior but is still based on invalid assumptions. Using it as an argument is, at best, amateurish and not really competent.
In reality, all security is through obscurity. Some is just more obscure (= better disguised). The actual reason this is bad is that what we call real security is a bazillion times more obscure, giving it an actually (statistically) trustworthy level of obscurity, as opposed to very simple obscurity that is just way too likely for someone to come up with from nothing.

Evi1M4chine
  • 146
  • 6
  • Interesting point of view regarding "all security is through obscurity". If something is impossible to crack in a human lifetime, isn't it secure? :) – niilzon Feb 09 '17 at 10:48
  • 1
    The *obscurity* of the phrase *security through obscurity* refers specifically to obscurity of the design of the system as opposed to key material. It's a way of referring to Kerckhoffs' principle. – Peter Taylor Feb 09 '17 at 12:20
  • 7
    Your claim that "security by obscurity" isn't a useful concept isn't really valid. I guess your point is that having, e.g., a secret key is just security by keeping the key obscure. However, if I find out your secret key, you can just replace it with a new key, get back up and running and try to be more careful with your new key. If I find out your secret algorithm, then you have to build a whole new system, which is much more work. Also, you can give much better bounds on how long it will take me to figure out your key, versus how long it will take me to figure out your algorithm. – David Richerby Feb 09 '17 at 13:14
  • 3
    @DavidRicherby I want to upvote this comment 100 times. It's not useful at all to bucket all security into the same bin with differing degrees. We invented the terminology "security through obscurity" for a *very* good reason. – Cruncher Feb 09 '17 at 14:36
  • @DavidRicherby: You’re repeating my very point as if it were a counter-argument. It is my point that all security only differs by the amount of work it takes to crack/fix (aka obscurity), and *therefore* there is no special magical divide with “not obscurity-based” “real security”. – Evi1M4chine Feb 17 '17 at 17:53
  • @Evi1M4chine No I'm not. You're saying that all security is, ultimately, security by keeping something obscure. I'm saying that we don't use the phrase "security by obscurity" to mean "security by keeping something obscure." It's like claiming that no system is secure because anything can be hacked by a sufficiently determined, sufficiently powerful opponent and, therefore, the word "security" is useless because nothing is truly secure. That's not what the word "security" is used to mean, and the thing you're talking about isn't what "security by obscurity" is used to mean. – David Richerby Feb 17 '17 at 18:01
  • @Cruncher: A good sign for when people believe stronger in something because they understand less of it, is when they use emotional arguments like “for a *very* good reason”, but don’t back it up. If it wasn’t a belief, they’d simply state that reason instead of a worn phrase. An unfortunately very common thing among humans. Just like argument reflection, selective interpretation, straw-man arguments, taking everything as personal and insulting, etc. – Evi1M4chine Feb 17 '17 at 18:01
  • 2
    @Evi1M4chine "For a very good [but unstated] reason" isn't an emotional argument. But it is argument by obscurity. ;-) – David Richerby Feb 17 '17 at 18:02
  • @DavidRicherby: Are you telling me what *I* said?? ^^ Dude, I’m agreeing with you! (Or rather you with me.) What more do you want? Relax! And please don’t “interpret” things I didn’t say. I didn’t say that the term “security” is useless. And yes, there is NO 100% security. Ever. **It is all just levels** of obscurity. You can conveniently pick an arbitrary definition of “secure”, but then you’re off into fantasy land. – Evi1M4chine Feb 17 '17 at 18:07
  • 1
    @Evi1M4chine No, I'm not agreeing with you. You said that the concept of security by obscurity is "a fallacy". I said that it is not. It is an extremely useful concept. – David Richerby Feb 17 '17 at 18:10
  • P.S.: This is another example of why text is a bad medium for human conversation. All the misunderstandings. – Evi1M4chine Feb 17 '17 at 18:11
  • @DavidRicherby: And 1. why are you making arguments against that then, and 2. when will you start making arguments to back it up. :) /me smells some trolling – Evi1M4chine Feb 17 '17 at 18:13
  • 1
    @Evi1M4chine For someone that chastised "emotional" arguments, you're sure being very emotional. The "very good" reason was actually more of an appeal to authority more than anything. Nothing emotional about it. I also didn't even respond to you directly. I was just reinforcing David's point. We're not in a debate here. The fact that my argument was incomplete does not mean that it's invalid or unfounded. – Cruncher Feb 17 '17 at 18:17
3

As others already explained, security by obscurity slows down an attacker at best.

In your specific case, I would say it will have no appreciable effect. To get to this point, your attacker already had to reverse engineer your App and extract the private key. That means your attacker is not an idiot and knows what he's doing. A little bit of obscurity will cost him less time than it takes you to implement it.

Tom
  • 10,124
  • 18
  • 51
  • Well technically this is to try to HIDE the fact that they need to get the private key from the app. That is, this happens first. However, anyone able to get the key from the app, will not get held up by this at ALL – Cruncher Feb 09 '17 at 14:30
  • 2
    Any non-idiot attacker would observe the legitimate traffic first and would immediately see that the requests are encrypted. It would literally be the first thing that he notices. – Tom Feb 10 '17 at 08:58
2

As others have already mentioned, you're proposing to use Security by Obscurity. While this technique does have its purpose, consider the following before choosing to take this approach:

  • Providing tech support for your API. Using deceptive HTTP response codes makes it difficult for anyone other than you to provide support. They have to consider whether a particular situation is actually sending the proper response code or an obscure one. Should you decide to remain the sole contact for any support requests, this shouldn't be an issue.
  • What is a "malicious user"? Use caution when categorizing a request as malicious, for doing so can have adverse effects. Suppose an IP is determined to be sending malicious traffic and countermeasures are applied. Now suppose that same IP is actually a proxy with hundreds or thousands of users behind it. You've now applied your countermeasure to all of them. The same principle applies to identifying malicious activity in headers and/or the body.
  • Application code is the slowest place to do this. The request has to traverse the entire stack before finally reaching the "security" logic. This architecture is slow and doesn't scale well. Bad requests should be stopped as early as possible, which is the premise for using Web Application Firewalls (WAFs).
  • Extending access. Should your API become accessible to more clients than it was originally designed for, those new clients will need to be aware of the possible use of deceptive HTTP response codes.
  • Persistent malicious users. As others have mentioned, this technique is only a speed-bump.

Better Approach

Focus your time on white-listing all known good requests. This is much easier than trying to identify all potential bad requests. Anything not on the white-list should immediately get an appropriate response code, such as HTTP 400, or HTTP 405 if you're filtering on HTTP verbs (as examples). Preferably this happens before the request even hits your application code.
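A minimal sketch of that approach at the application level (Flask as an assumption; ideally a WAF or gateway enforces the same policy before the request ever reaches this code):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/items", methods=["GET", "POST"])
def items():
    # Only explicitly declared routes and verbs are on the white-list.
    return {"items": []}

@app.errorhandler(404)
def unknown_path(_err):
    # Anything not on the white-list gets a conventional response code.
    return {"error": "bad request"}, 400

@app.errorhandler(405)
def unknown_verb(_err):
    return {"error": "method not allowed"}, 405
```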

Along with white-listing allowed requests, ensure your application is secured according to OWASP guidelines. You'll get far better results spending your time with OWASP than trying to determine what a malicious user is and returning an obscure HTTP response code.

user2320464
  • 1,802
  • 1
  • 15
  • 18
1

Fundamentally, you CANNOT prevent unauthorized clients from sending requests. The nearest thing possible would be to have some kind of cryptographic check done in the client, but as Sherlock Holmes said, "what one can invent, another can discover". (I know this for certain, because I have cracked people's client-side security on a number of occasions.)

Instead, build your API such that anyone is allowed to use it with custom clients. Make it friendly. What do you lose by that? Attackers will attack no matter what you do, and if you make your API easy to use, someone will come up with something you never thought of and make your server even greater than you ever imagined it could be. What could be done by a client that talks to both your API and some other? What myriad possibilities are there, and will it really hurt you to allow them?

rosuav
  • 239
  • 1
  • 3
1

A good reason for not doing this, especially for a mobile app, is that your app is very likely already talking to your server via multiple proxies, for example in the phone company. Most large enterprises use proxies on their networks to try to protect their systems from malicious content, and many phone companies reduce the quality of video or images in transit. Misusing status codes will also make the various system HTTP libraries on your platform useless to you. One approach that's commonly used is to derive a key or token from information the user is unlikely to want to share, such as a hash of their name, address, and credit card details. That way, even if your app is hacked, people will be wary of giving the required information to a random program they downloaded, which limits how widely any proven attack can be shared.
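A rough sketch of that derivation (Python; the fields are hypothetical, a real scheme would want a salt and a slow KDF such as PBKDF2, and handling card data at all carries PCI obligations):

```python
import hashlib

def derive_client_token(name: str, address: str, card_number: str) -> str:
    # Information the user is unlikely to share with a random cracked client.
    material = "|".join([name, address, card_number]).encode("utf-8")
    return hashlib.sha256(material).hexdigest()
```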

james
  • 11
  • 1
1

Generally it's not a good idea.

I made very targeted use of this once, to good effect, when a client's website was being used by a stolen-credit-card laundering ring. There were some identifying features shared only by the fraudulent transactions, so rather than politely refuse them, I had the site delay the response by a couple of minutes (nobody likes a website that takes minutes to do something) and then return a 500 with the site's standard "sorry, it's not you, it's me" message for server errors (it also logged details for passing on to law enforcement). We saw three attempted transactions that got this playing-possum response and then never heard from them again. (A minimal sketch of the response appears after the list below.)

That though was:

  1. In response to something we knew was an attack rather than being obnoxious to users having a problem.
  2. Not a defence against an attack on the security of the protocol itself, but at the human level above that.
  3. Explicable in other ways, i.e. we were pretending to have a really sucky website in a situation where that was plausible (there are a lot of sucky websites out there), and we weren't a specific target (the perps would move on to someone else).
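As referenced above, a minimal sketch of that playing-possum response (Flask as an assumption; the fraud-signature check is a placeholder for the identifying features we had):

```python
import time
from flask import Flask, request

app = Flask(__name__)

def matches_fraud_signature(req) -> bool:
    # Placeholder: the features shared only by the fraudulent transactions.
    return False

@app.before_request
def play_possum():
    if matches_fraud_signature(request):
        # Log details for law enforcement here.
        time.sleep(120)  # a real site would delay without tying up a worker
        return "Sorry, it's not you, it's me.", 500
```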

Users having a problem should be helped, not abused. "I can't do that because you aren't correctly authorised" is refusing to do something in a helpful way. "I can't do that because you need to use a proxy", when someone doesn't need to use a proxy, is abusive. Being deliberately unhelpful is only appropriate when you know you're being attacked, and even then it shouldn't look like an obviously bogus message, or you haven't hidden anything (indeed, you've potentially revealed something if the same bogus status isn't used for every client error).

That said, it is appropriate to take measures so that statuses don't leak information. E.g. it's acceptable (and described as such in the standards) for /admin/ to 404 even though it would 200 in another case (authorised user, allowed client IP addresses, etc.). Alternatively, if /members/my-profile will 200 for an authorised user and 403 or 401 otherwise, and /members/fdsasafdasfwefaxc would 404 for an authorised user, then it's a good idea for it to 403 or 401 for the unauthorised user too. This isn't so much security by obscurity as treating which URIs relate to resources as one of the pieces of information being protected.
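A sketch of that status-code discipline (Flask as an assumption; the auth check and the data store are hypothetical):

```python
from flask import Flask, request

app = Flask(__name__)
PROFILES = {"my-profile": {"name": "example"}}  # hypothetical member pages

def is_authorised(req) -> bool:
    return req.headers.get("X-Api-Key") == "replace-me"  # placeholder check

@app.route("/members/<page>")
def member_page(page):
    if not is_authorised(request):
        return "", 401  # same answer whether or not the page exists
    if page not in PROFILES:
        return "", 404  # existence is only revealed to authorised users
    return PROFILES[page]
```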

Jon Hanna
  • 269
  • 1
  • 5
1

I wouldn't do that. It generally means you are creating your own standard, which causes:

  1. your API to be predictable once someone maps your HTTP responses to their actual meanings. Changing HTTP responses might work on some "hackers" who would give up after a few attempts, but it won't work on other, more determined ones. Do you want to protect your API from the former type only, i.e. 11-year-olds? I don't think so.

  2. quite some confusion for developers and probably testers, especially ones who are used to operating according to global standards.

  3. Pick some other way. There definitely are a few. I've been struggling with the same head-scratcher myself for a few weeks lately and came up with at least 2 more or less reliable ways to achieve the desired restrictions [cannot post them here though].

There definitely are better and MUCH more effective ways to get what you want. Try to look at your API from a different angle... If you were a hacker and your life depended on hacking that API, what would you do to succeed? What would have to happen for you to be frustrated in your attempts to break it?

netikras
  • 111
  • 1