
In a system with a complex set of computed authorizations, does conveniently allowing a given user access to view all of their own authorizations decrease security?

In a "Policy as Code" system which relies on consumers of its API to develop their own integrations, it seems like a wise idea to allow convenient viewing of ALL of a user's authorizations, because a given user can request access more easily and take advantage of "code as documentation", rather than pestering InfoSec for the state of their authorizations on an as-needed basis.

To block access to a comprehensive list of a user's own computed authorizations seems to me like a matter of "security through obscurity", since users can likely explore the system to find out what they can and cannot access.

This came up in a discussion I had with a coworker on the subject of this Vault mailing list post about Vault Policy viewing:

Inspect your own token's policies?

But it applies to a lot of other things. Anyway, I'm asking this question because I've been wrong before, and I think it's premature for me to declare that "it's just obscurity":

How do I tell whether allowing a user to easily view ALL of his own authorizations will increase vulnerability?

Nathan Basanese
  • I can think of a case where a user account/object is authorised to access something that the person shouldn't know about or try to access. Not "obscurity", but not disclosing the infrastructure. I can also imagine a case where the list of authorisations results in load on the helpdesk as users ask "what's that?" – schroeder Nov 27 '18 at 20:53

4 Answers


Kerckhoffs's principle applies here: it should not.

In theory, the list of a user's privileges should match what the user is authorised to do. In practice, some oversight may cause you to grant a user more privileges than they really need, and disclosing authorisation information just makes it easier for them to figure that out.

The only drawback here is if the user is actually granted more access than they need. Then the authorisation information may tip off the user that they have more privileges than they should have. Ultimately, though, this isn't a problem caused by disclosing authorisation information, but rather one caused by the token's privileges being too broad to begin with.
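To make this failure mode concrete, here is a minimal Python sketch (the privilege names are invented for illustration; a real system would compute these sets from its policy documents). Disclosure only "tips off" the user when the difference between granted and needed privileges is non-empty:

```python
# Hypothetical privilege sets; real values would come from policy code.
granted = {"read:app-secrets", "write:app-secrets", "read:payroll"}
needed = {"read:app-secrets", "write:app-secrets"}

# Privileges the user holds but should not. This excess grant, not the
# act of disclosing it, is the actual security problem.
excess = granted - needed
print(sorted(excess))  # -> ['read:payroll']
```

If `excess` is empty, showing the user their authorizations tells them nothing they couldn't learn by simply using the system.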

Lie Ryan

You have asked a high-level question, so I'll provide a high-level answer.

Don't think about the problem in terms of "decreasing security" (which is undefined at best), but think in terms of "vulnerability" and "hazard".

  • What vulnerabilities (system and control weaknesses) are exposed through the disclosure of the information?
  • What hazards (unsafe conditions that could result in accidents) are created through the disclosure of the information?

To answer this, you do not create a threat model; you create a vulnerability model. Where is your system vulnerable, and where can hazards be created through the use or misuse of the system?

If the disclosure of the information does not impact the vulnerable points of your system and if the information cannot be used to create a hazard, or if the impacts and likelihood are low enough to be tolerated, then you have your risk-based answer.
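As an illustration only (the 1–5 scales, the scenarios, and the tolerance threshold are my own assumptions, not a standard), the final risk-based step can be sketched as a simple impact-times-likelihood check per disclosure scenario:

```python
# Toy risk model: accept a disclosure scenario only when
# impact * likelihood falls within an agreed tolerance.
TOLERANCE = 6  # illustrative threshold; pick yours deliberately

scenarios = [
    # (description, impact 1-5, likelihood 1-5) -- invented examples
    ("user learns the name of a KV store", 2, 3),
    ("user learns they can reach an admin endpoint", 5, 2),
]

for name, impact, likelihood in scenarios:
    risk = impact * likelihood
    verdict = "tolerable" if risk <= TOLERANCE else "needs mitigation"
    print(f"{name}: risk={risk} -> {verdict}")
```

The point is not the arithmetic but the discipline: each disclosure gets an explicit impact and likelihood, so "does this decrease security?" becomes an answerable question.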

schroeder
  • // , Hm. Thanks for the vocabulary & method tips @schroeder. I'll reframe my question accordingly, and perhaps include an answer showing how this way of assessing the hazards and vulnerabilities applies to the specific example of limited disclosure that I presented. – Nathan Basanese Nov 28 '18 at 20:55

I think the most accurate answer depends on the application. Authorization is a nuanced beast, and proper access control requires analysis of the underlying system.

That being said, my personal opinion is that while it may or may not be the best UX or provide the clearest UI, exposing authorizations to users would probably* not compromise security. If a user role can do something, I think someone with that role will probably figure it out eventually.

*the caveat is important here - again, the actual answer depends on the application. This is just speculation and very well might be wrong (once again, depending upon the nature of your application). YMMV, GLHF, etc. etc. etc.

securityOrange
    // , Sheesh half this answer is disclaimers. Got any ideas on what sort of info would mean it's a good idea vs a bad idea? – Nathan Basanese Nov 28 '18 at 05:25
  • // , That is to say, if it depends on the application, then on what aspects of the application does it indeed depend? – Nathan Basanese Nov 28 '18 at 06:07
  • @NathanBasanese as I mention in my comments, it entirely depends on how vulnerable the system or the information is to this form of disclosure. That's why all the disclaimers. – schroeder Nov 28 '18 at 10:48
  • Exactly. The aspects of the application it depends on are really...what the application is, what it does, and how it's built. It might not be the answer you're looking for @NathanBasanese, but in reality none of us out here in Internet Land are really able to give definitive, useful information of the type you're requesting given so little context. So the disclaimers and tautologies are accurate: if it's vulnerable it's vulnerable, and it may or may not be vulnerable. If deeper insight is required, your system probably holds more answers here than we do. – securityOrange Nov 29 '18 at 03:17
  • // , Well, I've entered the fray, let me know what you think of my answer. – Nathan Basanese Jan 09 '19 at 23:30

As already mentioned in Lie Ryan's answer, Kerckhoffs's principle applies here. Granting a user knowledge of his own resources on a system should not increase the vulnerability of that system. If you think it does, you have some bigger problems to consider first.

The first thing to do with your system: Assess whether giving knowledge of how that system works increases hazard in the first place.

If it does increase hazards, you may have bigger problems than simply whether a user can view their own authorizations, and need to lock down any other knowledge of that system.

Furthermore, if hiding a user's authorizations is a condition of reducing your system's vulnerability, you are setting yourself up for failure.

This is because, in a properly available system, a user can fairly easily discover their capabilities over time simply by using the system in question and recording what they can and cannot do. This applies even if the system is a method of troop deployment, because, hey, your users are going to have to, well, use it.

In fact, a user's knowledge of what's available to him is often a condition of availability in the first place.

And an unavailable system has failed already.

Back to Kerckhoffs's principle, here are some questions to check whether you're in a "security through obscurity" type of situation:

  1. Is the related code or specification for your system open-source?

  2. Do others using similar systems also restrict this information?

  3. If not, has it led to increased hazards for them?

  4. What scenarios can you think of where allowing this info to all users would make the system vulnerable? And does restricting the information actually reduce vulnerability (compared to alternatives)?

Here is a worked example, using two of my Blue Team security tools of choice, KeePassXC and HashiCorp Vault:

  1. Yes: both KeePassXC and HashiCorp Vault are open-source.

  2. KeePassXC: No, because authorization is limited to the KeePassXC DBs for which you have passwords and keys. HashiCorp Vault: No.

  3. KeePassXC: Nope. If you do not name your DB after any of your passwords, you're golden. HashiCorp Vault: No, as long as you don't use the names that appear in the authorization method (e.g. KV store names) to hold secret data. Again, there is some responsibility on the user here. If you respect the boundaries, and don't name your DBs and your KVs after usernames or passwords, this does not introduce a new vulnerability.

  4. Here is a scenario where vulnerability is increased by a user's knowledge of his own access:

A. Where the user already has privileged access he should not, and simply wouldn't know it otherwise.

In this case, it would be prudent to restrict the user's knowledge. But a better alternative is to reduce the access levels in the first place, rather than merely restricting his knowledge of his own access, because he may discover his available resources accidentally anyway. If you're in a situation where you've named your KeePassXC DB or your HashiCorp Vault KV keys after something privileged, both tools give you ways to move that information down to where it is not exposed to the authorization system. For the Vault KV store example, just change the key names to something generic, and move the privileged information into the secure values of those keys.
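To illustrate that last suggestion in the abstract (the key names here are invented, and a plain dict stands in for whatever your secret store is), here is a sketch of moving sensitive information out of key names, which the authorization layer can see, into values, which it cannot:

```python
# Before: the key name itself leaks a privileged detail to anyone who
# can list their own authorizations over this store.
exposed = {"prod-db-root-password": "hunter2"}

# After: generic key names. The sensitive detail lives only inside the
# value, which authorization metadata does not reveal.
sanitized = {
    f"credential-{i}": {"label": key, "secret": value}
    for i, (key, value) in enumerate(exposed.items())
}
print(sanitized["credential-0"]["label"])  # -> prod-db-root-password
```

After a transformation like this, disclosing the list of key names a user may read discloses nothing privileged.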

Luc
Nathan Basanese
  • // , I have made this one longer only because I have not had the leisure to make it shorter. – Nathan Basanese Jan 09 '19 at 23:25
  • We have asked you several times to stop prefacing all your posts with symbols. You have said that you would stop. Please stop. – schroeder Jan 15 '19 at 09:31