108

So, I keep finding the conventional wisdom that 'security through obscurity is no security at all', but I'm having the (perhaps stupid) problem of being unable to tell exactly when something is 'good security' and when something is just 'obscure'.

I checked other questions relating tangentially to this, and was unable to figure out the precise difference.

For example: Someone said using SSH on a nonstandard port counts as security through obscurity. You're just counting on the other person to not check for that. However, all SSH is doing is obscuring information. It relies on the hope that an attacker won't think to guess the correct cryptographic key.

Now, I know the first circumstance (that someone would think to check nonstandard ports for a particular service) is far more likely than the second (that someone would randomly guess a cryptographic key), but is likelihood really the entire difference?

And, if so, how am I (an infosec n00b, if that isn't already abundantly clear) supposed to be able to tell the good (i.e. what's worth implementing) from the bad (what isn't)?

Obviously, encryption schemes that have been proven vulnerable shouldn't be used, so sometimes the call is clearer than at other times; what I'm struggling with is knowing where the conventional wisdom does and doesn't apply.

At first blush it's perfectly clear, but when I actually try to extrapolate a hard-and-fast, consistently applicable rule for vetting ideas, I run into problems.

root
  • See [Isn't a password a form of security through obscurity?](http://stackoverflow.com/q/4486171/632951) – Pacerier Feb 16 '15 at 07:52

15 Answers

90

The misconception you're having is that security through obscurity is bad. It's actually not; security only through obscurity is terrible.

Put it this way: you want your system to be completely secure even if someone knows its full workings, apart from the key secret component that you control. Cryptography is the perfect example of this. If you are relying on the attacker 'not seeing your algorithm', say by using something like a ROT13 cipher, it's terrible. On the flip side, if the attacker can see exactly which algorithm is used yet still cannot practically do anything, we have the ideal security situation.

The thing to realize is that you never want to count on obscurity, but it certainly never hurts. Should I password-protect / use keys for my SSH connection? Absolutely. Should I rely on changing the server from port 22 to 2222 to keep my connection safe? Absolutely not. Is it bad to change my SSH server to port 2222 while also using a password? No; if anything this is the best solution. Changing ("obscuring") the port will simply cut down on the heap of automated exploit scanners searching normal ports. We gain a security advantage through obscurity, which is good, but we are not counting on the obscurity. If they find the port, they still need to crack the password.

TL;DR - Counting only on obscurity is bad. You want your system to be secure with the attacker knowing its complete workings, apart from specifically controllable secret information (i.e. passwords). Obscurity in itself, however, isn't bad, and can actually be a good thing.

Edit: To answer your probability question more precisely: yes, in a way you can look at it like that, but appreciate the differences. Ports range from 1-65535 and can be checked within a minute by a scanner like nmap. "Guessing" a random, say, 10-character password drawn from all ASCII characters is a 1 in 1.8446744e+19 shot and would take 5.8 million years at 100,000 guesses a second.
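To see the scale difference in code, here is a minimal back-of-the-envelope sketch (the scan rate is an illustrative assumption; the keyspace is the figure quoted above):

```python
# Rough comparison of the two "search spaces" discussed above.

ports = 65535                 # total TCP ports (1-65535)
scan_rate = 1500              # ports/second, a plausible scanner rate (assumed)
print(f"Full port sweep: ~{ports / scan_rate:.0f} seconds")

keyspace = 2 ** 64            # = 1.8446744e+19, the figure quoted above
guess_rate = 100_000          # password guesses per second
seconds_per_year = 60 * 60 * 24 * 365
print(f"Brute force: ~{keyspace / guess_rate / seconds_per_year:.1e} years")
# -> about 5.8e+06, i.e. the 5.8 million years mentioned above
```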

Edit 2: To address the comment below: keys can be generated with sufficient entropy to be considered truly random (https://www.rfc-editor.org/rfc/rfc4086). If not, that's a flaw in the implementation rather than the philosophy. You're correct in saying that everything relies on attackers not knowing information (passwords), and the dictionary definition of obscurity is "the state of being unknown", so you can correctly say that everything counts on a level of obscurity.

Once more, though, the worth comes down to the practical security you get while the information you control remains unknown. Keys, be they passwords, certificates, etc., are (relatively) simple to keep secret. Algorithms and other easy-to-check methods are hard to keep secret. "Is it worthwhile" comes down to determining what it is possible to keep unknown, and judging the chance of compromise based on that unknown information.

Peleus
  • `but it certainly never hurts` **Absolutely wrong**. Everything that you change, every bit of complexity you add to a system, incurs the risk that it will introduce a vulnerability. In fact, the very example you used introduces a vulnerability. Running SSH on a high port _hurts_: ports above 1024 can be bound to without root, allowing an attacker to impersonate an SSH server, but if, and only if, you implemented your "harmless" obscurity measure. Of course, using a non-standard low port doesn't hurt, but that's clearly something you didn't take into account! – forest Jan 28 '18 at 23:15
  • @forest Additionally, if you 'hide' things from automated scanners, that includes vulnerability scanners that you run yourself; e.g. if a 'headline news' SSH vulnerability is discovered, you risk failing to identify the systems to patch. You risk your services being blocked by over-zealous security people on both sides (a security risk in itself). And obscuring cryptographic algorithms risks breaking things too; it's dangerously close to rolling your own security. ("Let's break up your password into two 6-character chunks and encrypt them separately" might ring some bells!) – JeffUK Jun 04 '18 at 13:11
42

Secrets are hard to keep secret. The larger a secret is and the more people that know it, the sooner it is likely to leak.

Good secrets are:

  • Small.
  • Known only by one person.
  • Easy to change.

When we accuse someone of security through obscurity, what we are really saying is that we think their secret could be smaller, known by fewer people, and/or easier to change.

Algorithms, port numbers and shared passwords all fail the second and third points above. Algorithms also fail the first point.

The distinction between an appropriate secret and mere obscurity is whether we know a way of achieving the same level of security with a smaller secret that is easier to change and known by fewer people.


I disagree with the assertion that additional obscurity never hurts.

In the case of SSH port numbers, there is a small amount of extra time required to type `-p 1234` every time you use SSH. This is only a second or two, but with the number of times I use SSH, it ends up significant. There's the overhead of remembering that this client is slightly different and of teaching new hires the same. There's the case where you forget that this client is on a strange port and waste minutes looking at firewall configs and uptime monitors trying to figure out why you can't connect.

Since port numbers are so easy to discover with a port scan, you will also have to implement an IPS that detects port scans and prevents the correct port from responding when it is checked, or implement something like port knocking. Both of these methods can be overcome and add nothing more than further obscurity, but they take up your time playing cat-and-mouse with your attacker.

The same amount of time spent turning off root logins and passwords and switching to keys will have a bigger impact on security. Time wasted on obscuring details takes away from real security measures.

In the case of a secret algorithm, the algorithm misses out on the additional scrutiny that many security researchers can provide. The obscurity (or secrecy) of the algorithm is likely causing it to be less secure.

Ladadadada
  • I don't mean to sound defensive, but would you argue the overhead of a different port really is significant? A simple port change avoids the vast majority of automated exploit scanners looking for misconfigurations. I'd say `-p 2222` is not a significant overhead in any real sense. – Peleus Mar 06 '13 at 09:31
  • I wouldn't argue that vehemently or for very long. I estimate it would add up to *minutes per day* in my situation, but my day job is sysadmin for a medium-sized infrastructure where using SSH is very common. Larger *or* smaller would involve less actual use of SSH. I *would* argue at length that the same effort could be better spent. – Ladadadada Mar 06 '13 at 09:39
  • If you're incapable of editing your .ssh/config to add a port other than the regular one, then I'd hesitate to take any security advice from you. – Fake51 Mar 06 '13 at 10:35
  • @Ladadadada has a completely valid point. Although changing the default SSH port is a common practice to prevent automated tools, it has drawbacks. For example, inward connections to ports other than 22 may be blocked by a client-side firewall at another corporation, etc. Apart from that, automated tools can easily be blocked via a simple IP ban after 3 login attempts. It's not adding a new port that's hard; you need to get every employee (internal and external) familiar with it, and all this effort doesn't seem worth it. – Rohan Durve Mar 06 '13 at 10:45
  • In very high-security situations, point 2 is not strictly speaking true: there are situations where nobody knows the whole secret, each person knowing only part of it, and all of them have to be present to unlock it. – Lie Ryan Mar 06 '13 at 11:05
  • @Ladadadada, I _might_ agree with the general idea of overhead, but your example (excuse me) is ridiculous. The overhead here is a product of your own failure to optimize your tasks. I have a small one-line shell script that I double-click every time I want to set up my SSH tunnel (which uses an unusual port number, brute-force protection, and key authentication). – Adi Mar 06 '13 at 12:38
  • @Adnan I've already mentioned that I won't argue strongly that it is a *large* overhead (because it isn't), but you have focused on only one aspect of having SSH on a non-standard port and I mentioned several. You also seem to want to argue about something that has nothing to do with security. *Any* time wasted on dealing with the non-standard port (including adding lines to `~/.ssh/config` and writing scripts to log in) is time you could have spent on adding real security. – Ladadadada Mar 06 '13 at 13:04
  • @Ladadadada, if you don't want your opinions to be criticized then don't post them on SE. As a user of the service (SSH in this case) there's really not much "real" security you can add. Changing the SSH port from the default 22 _is_ good practice; my argument is that it prevents the large percentage of automated attacks. You say it's not good practice; your argument is that it wastes 10 seconds to write a shell script, and that in those 10 seconds you'll do something to impact the security of SSH. – Adi Mar 06 '13 at 13:25
  • Should I **only** change the default port and use `password` as my password? Of course not. But it's certainly _better_ to do both. Plus, neither of us can have a final verdict on this; it all depends on the case. When changing the default port is inconvenient (requires modifications to a large number of clients) then it's not good. – Adi Mar 06 '13 at 13:29
  • +1: This is basically the more informed, intelligent, reasoned version of my snarky super-secret-crypto-port comment. – deworde Mar 06 '13 at 15:00
  • Well, the `ssh -p 1234` argument _is_ invalid because we have the .ssh/config file, which lets you specify that `ssh foo` connects to server `bar` on `whatever` port, using key `foo_rsa`. – jb. Mar 06 '13 at 21:13
  • My opinions are always up for criticism, but my argument is that **the assertion that "additional obscurity *never* hurts" is false**. The reason I won't argue about `-p 1234` is that it's not an important part of my actual argument. It's merely an example of obscurity hurting (however minuscule the hurt may be) and was only chosen because it's in the original question, not because it's a good example. If we assume that you're right and changing the SSH port *is* worth whatever time it takes, my argument is not weakened in the slightest. – Ladadadada Mar 07 '13 at 22:54
  • In a heated, futile argument Lie Ryan's comment went unnoticed; maybe you could include it in your nice answer? – techraf Jun 22 '16 at 06:11
21

Security by obscurity is where you rely upon some fact which you hope is not known to an attacker. A major problem with this is that once the fact is disclosed, the security scheme is rendered useless.

> However, all SSH is doing is obscuring information. It relies on the hope that an attacker won't think to guess the correct cryptographic key.

When the phrase "security by obscurity" is discussed, it often refers to the processes involved, rather than the secret information. The thing about SSH is that, as a process, it has been heavily vetted to ensure that the only thing you need to keep secret is the cryptographic key. It is not possible, even in principle, for the attacker to simply "think and guess" that key, because the space in which cryptographic keys live is vast.

Bruce Schneier showed that in order to brute-force a 256-bit AES key you would need, at a minimum, to capture the entire sun's energy output for 32 years(!). It doesn't matter how fast your computer is; that's an information-theoretic result which holds regardless of the computer you use (quantum computing notwithstanding).

This is totally impractical with current technology. That's not to say that SSH uses AES, but it is one of the principles of good cryptography.

An example might be where a bug is discovered in a piece of software where a (trusted) user finds a specific input allows an authentication bypass. A poor manager might say "ah, but it's really unlikely that any untrusted users will ever discover that, why bother fixing it". This is security by obscurity.

pwaller
  • The obscurity indeed refers to a procedure (algorithm), rather than a piece of shared knowledge (password). Often the reason "security through obscurity" is considered bad is that its opposite (public, known, tried, tested, proven algorithms) is considered good. People trying to implement an "obscure" algorithm will often invent their own, frequently overlooking certain things and creating huge vulnerabilities. – Konerak Mar 06 '13 at 15:19
  • *"Bruce Schneier showed that in order to brute force a 256-bit AES key you would need at a minimum, to capture the entire sun's energy output for 32 years(!)"* - Actually, that was to brute-force *(all combinations of)* a 192-bit key. To brute force a 256-bit key would require more energy than the entire sun will *ever* output. In fact, he shows that it would require the energy of around 137 billion supernovas. – BlueRaja - Danny Pflughoeft Mar 06 '13 at 17:23
  • @BlueRaja-DannyPflughoeft, that's why I said *at a minimum* ;-) – pwaller Mar 07 '13 at 08:45
5

It's been touched on in several other answers, but there are three pieces to this puzzle.

  1. Mechanisms
  2. Implementation/Configuration
  3. Data

An example of a mechanism would be AES, or SHA-1, or for your example, SSH.
An example of an implementation/configuration would be which port SSH is listening on, or which encryption algorithm you've chosen to encrypt your application's data. An example of data is a private key, or a password.

A mechanism should never be obscure. "It's safe because you don't know how it works" is not security. It should be able to be examined in minute detail without implementations being exploitable in the absence of secret data.

An implementation may or may not be obscured. Generally, obscuring it neither hurts nor helps security materially. You may see fewer port scans identifying your SSH port, or you may be able to hide the encryption algorithm used for a particular ciphertext, but for a secure mechanism, without the secret data it should not matter: the mechanism should still be unexploitable. There's an argument that there's a marginal security benefit here, and a marginal harm to usability. Your mileage may vary.

Secret data should always be obscure. If someone gets a hold of your private keys or passwords, you revoke them, create new secret data, and vow to protect it better next time.

Xander
3

Security through obscurity covers everything related to not fixing a particular weakness at the code/source level, and instead finding a workaround to cover the hole. When that layer of protection is removed, the vulnerability is out in the open to be exploited.

One such example is program hooks, which give developers a covert means of connecting to applications in a production environment. This is indeed a threat, and its security is a myth: it is quickly defeated by anyone with enough knowledge to reverse engineer the application, and sometimes just by sniffing the network.

Usually these threats escape into the wild when they are missed in the SDLC phase of system/application design; once the system is in production, it costs too much to repair things properly from that point forward. That is where workarounds and cover-ups start to emerge.

Another example: people writing their passwords on pieces of paper and putting them under their keyboards.

As a market factor, you should also know that such practices are normally found among closed-source vendors and communities; for an open-source project the concept serves no practical purpose, since the code is released to the general public for review and just about anyone can address concerns through techniques such as code review, the best and most reliable way of catching these problems.

Practical examples of defeating the 'SSH security through obscurity' concept:

  1. Run a Nessus scan on the targeted network to reveal the vulnerable services and mapped ports.
  2. Run an nmap scan on the targeted network for open services (sketched below).
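As a rough illustration of step 2, here is a minimal TCP connect-scan sketch in Python (the host below is a placeholder; only scan machines you are authorized to test):

```python
import socket

def find_open_ports(host, ports, timeout=0.5):
    """Attempt a TCP connect() to each port; return the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Sweeping the full range takes minutes at most, which is why a
# nonstandard SSH port is obscurity rather than security:
# print(find_open_ports("192.0.2.10", range(1, 65536)))
```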
Saladin
  • The examples make sense. It's obvious that writing your password on something and keeping it by the computer obviates the purpose of the password. But what about the case of changing the service port? Demonstrably, that makes the thing more secure. If a flaw is found in SSH, it may put off when you'll be screwed long enough that you can have the issue patched before someone finds and exploits it. For that kind of thing, at what point do you know 'alright, this is just ridiculous'? – root Mar 06 '13 at 08:14
  • As I said, the same applies to ssh: fixing the problem at the source is required, whether that means updating the ssh crypto library or whatever else it takes. A change of port means that if the attacker has a sniffer deployed in your environment, he can sniff the handshake and learn the obscure port you use. – Saladin Mar 06 '13 at 08:20
  • I'll grant that the switching service port example is a bit weak. A different example, then; say you're using a method of encryption that, with the fastest computer currently on earth, would take 10 years to crack. It can be cracked by brute force, just not easily. You've obscured the information by doing the computational equivalent of putting a couch on top of it. But it's still gettable. Is that reasonable, given Moore's law? What if it would take 100 years? 1000 years? Assuming you don't know how long the information must be secured. – root Mar 06 '13 at 08:31
  • I guess my issue is that, from what I've found, what is and isn't obscure depends more on the attacker than on the policy itself. If your attacker is cleverer than the people who made the tools you're using to protect yourself/your employer/your client/your cat blog, their security is, from the attacker's perspective, just so many computational couches to be moved. How do I know I've piled enough couches? Or, rather, how can I know the policy I thought of will be adequate even against someone smarter than me? I know no one can reverse subtraction, for example. But a given security policy? – root Mar 06 '13 at 08:36
  • You need to understand that every algorithm needs a particular level of assurance for the system, formally called the target of evaluation (Orange Book). These algorithms often go through scrutiny, such as that done by NIST; in fact, there is a complete science of evaluating algorithms. The catch is that only publicly disclosed algorithms can be tested by a third party. The possibility of reverse engineering and other threats is what the evaluation process checks for. When we use e.g. AES-128 we are trusting the encryption standard, and that trust comes from evaluation. Sometimes policy drives evaluation. – Saladin Mar 06 '13 at 09:01
  • Again, I must admit ignorance; I need to get around to reading the rainbow books. But, to make sure I'm correctly understanding what you're saying: there is a science to evaluating the security of algorithms when they're developed (not simply for encryption), and though for everyday use the adage 'security through obscurity isn't security' serves as a useful quick self-check, there are far more rigorous definitions of 'secure enough', which unfortunately (but understandably) require entire books to enumerate. Fair analysis? – root Mar 06 '13 at 09:12
  • @root I suggest you read this link. http://csrc.nist.gov/groups/STM/cavp/index.html – Saladin Mar 06 '13 at 09:31
  • Duly noted. Looks like the kind of documentation a person could sink their teeth into. Luckily, I have a few free days coming up... Thank you. – root Mar 06 '13 at 09:36
  • The threats that apply to cryptographic algorithms are a world apart from what you're calling "security through obscurity"; check this link for more info: http://www.giac.org/cissp-papers/57.pdf. Use of public-key architecture is one such example: only the private keys need to be protected. – Saladin Mar 06 '13 at 09:37
2

"Security through obscurity is no security" is perhaps more accurately stated as "a security system is only as secure as its secrets are hard to guess." Really, when you get down to it, encryption could be argued to be security through obscurity, since the encryption key is obscure. The difference is that it is so obscure that it is mathematically infeasible to find, and therefore secure.

In any secret-based security system, you want the secret to be as limited as possible and as hard to guess as possible. The more complex a secret, the more likely there is to be a flaw in it. Also, limiting the amount that must be kept secret makes it easier to keep it secret.

The statement "security through obscurity isn't security" stems from the idea that many "clever" ideas are simply convoluted ways of doing something in the hope of making it harder for an attacker to figure out. Often, one detail of such an approach impacts other details of other steps, so it is impossible to tell how hard it will be for an attacker with partial knowledge of a secret algorithm to determine the rest of the algorithm.

Keys, on the other hand, should be random: knowing a few bits of a cryptographic key, for example, shouldn't help you figure out the other bits. Similarly, the difficulty of figuring out the key is fairly well understood. Since the relative security of the algorithm is not impacted significantly (or reliably quantifiably) by the secrecy of the algorithm, that secrecy doesn't add statistically significant security.

What does make a statistically significant impact on the security of an algorithm is any problem with the algorithm itself. In general, published algorithms have been much more thoroughly examined for flaws that break them, and thus will generally provide higher confidence in the security they provide.

So in closing, most security does involve some level of obscurity, but the trick is to minimize the quantity and maximize the ease of protecting those secrets while also trying to ensure that there are not undetected flaws that will cause the system to misbehave and reveal the secrets.

AJ Henderson
1

In every encryption algorithm and at every login prompt, 'security by obscurity' is a major component. Security always relies on some kind of secret knowledge (with the exception of two-factor authentication).

The difference between good security and bad security is connected to the properties of the secret knowledge: Does it stay secret?

A bad example is a system where you can derive information about this secret from other channels. Let's say you invented your own encryption algorithm, for example "zip, then XOR with your key". An attacker probing your system might determine the compression algorithm from the time it takes your scheme to encode different plain-text messages. The attacker has gained knowledge about your system, knows the internals of the zip algorithm, and might be able to use this data to determine your key. From the outside this looks like a perfectly good algorithm; the compressed and XOR'ed data will look pretty random, yet pose only a small challenge to a sophisticated attacker. Your key might be very long, but that does not help you distinguish between bad and good obscurity: you accidentally embedded into the algorithm a path for gaining knowledge about your secret key.
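To make that concrete, here is a toy sketch of the "XOR with your key" half (hypothetical key and data; it assumes the attacker knows that zip output starts with the standard magic bytes). With a repeating-key XOR, a stretch of known plaintext hands the key straight back:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR; never use this for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret!!"  # hypothetical key
ciphertext = xor_cipher(b"PK\x03\x04...zipped data...", key)

# Every zip stream begins with the magic bytes PK\x03\x04, so XORing
# that known plaintext against the ciphertext leaks the key itself:
known = b"PK\x03\x04"
recovered = bytes(c ^ p for c, p in zip(ciphertext, known))
print(recovered)  # b'secr': the first four key bytes, with no guessing at all
```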

The counterexample is RSA public-key encryption. Here the secret is a pair of large prime numbers; the public key includes their product. Even with the RSA algorithm well known, I can give you my public key, and you can encrypt whatever data you want with it, yet it does not leak any information about the secret key.
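A textbook-sized toy illustrates this (the primes here are absurdly small and purely illustrative; real moduli are 2048+ bits):

```python
# Toy RSA; these primes are trivially factorable and for illustration only.
p, q = 61, 53                # the secret primes
n = p * q                    # 3233: public modulus, handed out freely
e = 17                       # public exponent
phi = (p - 1) * (q - 1)      # 3120: computable only if you know p and q
d = pow(e, -1, phi)          # 2753: private exponent (Python 3.8+ modular inverse)

message = 42
cipher = pow(message, e, n)          # anyone may encrypt with the public (n, e)
assert pow(cipher, d, n) == message  # only the holder of d can decrypt

# Recovering d from (n, e) alone requires factoring n back into p and q,
# which is infeasible at real key sizes: the secrecy lives entirely in a
# small, easily changed key rather than in the (public) algorithm.
```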

So what is important in distinguishing good from bad security is the amount of time someone needs to get access to your data. In your specific example, going from port 22 to 2222 is one more piece of information the attacker needs, so it is a security plus. As it is easy to figure out within a short time, it adds only a very small amount, but it leaks nothing about your key. Since a port scan is trivial and a one-time cost, the total amount of information needed to learn your secret key stays essentially constant; that is why the change is not considered to improve total security, hence the common saying that 'security by obscurity' does not help.

Alexander
1

"Obscurity" is about assumptions

I think "security by obscurity" is actually about faulty assumptions.

For example, if I naively use my own hand-rolled encryption, thinking "no one will know how to break it because it's unique":

  • I know it can be broken by someone who has the key.
  • I assume it can't be broken by other means. This is probably false.

If I use a proven encryption method:

  • I know it can be broken by someone who has the key.
  • I have good evidence that it can't be broken by other means.

I'm still relying on the "obscurity" of my key. But that's the only thing I have to protect.

So, to detect "security by obscurity", challenge assumptions. If someone says "nobody could guess or detect that we're doing X", the correct response is "How much proof do you have?" The standard in security is very, very high.

Nathan Long
1

I'd formulate it this way:

'Security through obscurity' refers to a situation where an attacker is deliberately provided with all the means/information needed to break the security mechanism, in the hope or on the assumption that they will not spend the effort to reveal it.

Sometimes you can observe a program trying to achieve security through some 'automatic' encryption scheme where, in the end, the encryption key is contained somewhere in the program itself, right next to the encryption algorithm. The program needs no further information to decrypt its 'secret' data; and neither does any attacker.

'Real' security tries to make sure that an attacker never has all the information needed to break it. When using encryption, it basically does not matter if the attacker has access to both the cipher text and the algorithm that created it, as long as the encryption key is not disclosed to him. That way he is denied critical information and cannot simply bypass the security mechanism with the information he has.

JimmyB
0

You can use whatever you'd like for the "secret key," but ports make a poor choice; there are only 2^16 of them, they can be sniffed, and they are (usually) static for the duration of the connection.

However, they have been used as (part of) the secret key in the past, when there was no other good choice. In particular, randomizing the source port was used to counter the Kaminsky DNS cache-poisoning attack from a few years ago. Combined with randomizing the 16-bit query ID, this gives us 32 bits of security, which is more than enough to protect us for the duration of a typical DNS query (~0.1 seconds). The secret key can still be sniffed, but that's considered "not a big deal" since DNS has always been vulnerable to MITM attacks anyway. C'est la vie.
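A quick sketch of that arithmetic (the forged-response rate is an illustrative assumption):

```python
# Entropy defending a single DNS query after the Kaminsky mitigations.
keyspace = 2 ** (16 + 16)    # random source port + random 16-bit query ID

window = 0.1                 # seconds a typical query remains outstanding
spoof_rate = 1_000_000       # forged responses per second (assumed)
p_win = spoof_rate * window / keyspace
print(f"Odds an off-path attacker wins one race: ~{p_win:.1e}")  # ~2.3e-05
```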

So, whether your example is "security by obscurity" or not really depends on the context.

0

If you have a whole suite of protection mechanisms, you might say that any one of those protection mechanisms is only "security through obscurity", but it is more important to consider the security of the system as a whole.

Individually, a protection mechanism is considered "security through obscurity" if that protection mechanism depends on an attacker not expecting something, or if that protection mechanism depends on being unusual rather than cryptographically strong. In other words, putting SSH on port 2222 is security through obscurity because an attacker wouldn't expect it to be there (it wouldn't be their first guess), and because that isn't the normal port. However, protecting SSH with a high strength password is real security because it is intended to be cryptographically strong. Additionally, changing your username from "root" to something that can't be easily guessed is also real security because there is a measure of cryptographic strength there: if the attacker cannot figure out the username they cannot very well break into the system even if they get the password correct.

KyleM
0

By obscurity I understand that your security is based on the fact that a hacker is not aware of your encryption algorithm. You just reverse the characters in your words, like PASS -> SSAP, or do more complex obfuscation, and you can communicate as long as nobody spots the algorithm. If unveiling the encryption algorithm breaks all your security, then it is obscurity, not security. Real security starts when the hacker cannot decrypt your message even given the encryption and decryption algorithms.

Ali Ahmad
Val
0

Security is done by authenticating that you are the intended recipient or user. There are three factors of authentication:

  • Something the user knows (e.g., password, PIN, pattern)
  • Something the user has (e.g., ATM card, smart card)
  • Something the user is (e.g., biometric characteristic, such as a fingerprint)

Most security measures use single-factor authentication. SSH, for example, requires either knowing a password or having a private key (I guess you could say that requiring a passphrase for the key would be two-factor authentication).

Two-factor authentication is something that a lot of service providers and software have implemented lately. It requires any two of the factors listed above, usually a password and a phone. With two-factor authentication, an attacker can get hold of one of the security credentials and still not be able to access the secured system.

Security through obscurity is zero-factor authentication. You are not required to know any secret, possess anything, or be any particular person. Where there is no authentication, there is no real security.

Eric
0

Obscurity can only hurt you if you think it provides true security and adopt weak practices because of it. However, obscurity can help slightly, as it buys you time against unknown future vulnerabilities, provided you otherwise maintain best practices (strong passphrases, applying security updates, using security-vetted algorithms and protocols, etc.). Obscurity doesn't prevent an attack from someone who has targeted you if your system is vulnerable, but it also doesn't advertise to the entire world that you are vulnerable.

If you had a Ruby on Rails app, advertised that fact, and happened to be away on vacation last January, people could have run arbitrary commands on your webserver. In fact, the advertisement would let attackers find you much faster than if they had to guess what sort of technology stack each random website was running and try them all.

Or let's say a zero-day weakness is found in SSH, something like the Debian SSH key-generation issue from a few years back. People will start randomly scanning for ssh running on port 22 at random IP addresses and then run the exploit. Sure, they could do a port scan first to search for ssh, but attackers will go for the low-hanging fruit first; a full search would make their scan more than 10,000 times slower. Hopefully by then you've patched the issue. Most random IP addresses don't have ssh or anything else running on them, so it makes sense for attackers to stop scanning after port 22 (and maybe a couple of others, like 222, 2222, and 22222). If your home server doesn't respond to pings and drops all packets to every port other than 39325, they will likely move on before finding your ssh server. That's obscurity helping you. Yes, a network eavesdropper could trivially find your port by listening in on one ssh connection, but the vast majority of attackers who target you randomly won't have observed an ssh connection on your network. Furthermore, even against those attackers, 99.9% of the time you expect your ssh configuration to be secure and free of vulnerabilities.

As for the extra hassle of typing ssh -p 39325 -Y foologin@foo.subdomain.example.com: anyone who uses ssh frequently sets up an ~/.ssh/config file (along with authorized_keys and id_rsa.pub/id_rsa) so they can just type ssh foo to connect, after typing their private key's passphrase. The config file remembers the full domain name, your user name, the port (if not 22), and any other flags. For ssh, I'll change ports if it's internet-facing and only I use it (my home machine, my VPS), as it's a hassle to get everyone to use the same port as you. For internal multi-user stuff at work, I keep it firewalled from the outside internet and require outside access to pass through a VPN.
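For reference, a minimal ~/.ssh/config entry along those lines might look like this (host, user, and port are the hypothetical ones from the paragraph above):

```
Host foo
    HostName foo.subdomain.example.com
    Port 39325
    User foologin
    IdentityFile ~/.ssh/id_rsa
    # Equivalent of the -Y flag:
    ForwardX11 yes
    ForwardX11Trusted yes
```

After that, `ssh foo` does everything the long command line did.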

For the record, my VPS used to have ssh running on port 22 and logged roughly 10,000 bad authentication attempts per day (all with non-existent user names). In the latest three months of log files, running on a different port, I've gotten exactly zero.

dr jimbob
0

I guess that obscurity can be measured by comparing its security value to how much it complicates use of the protected medium. Someone already mentioned that changing the SSH port will increase your security a little, but at the same time it will complicate use of the shell a lot, because you'll have to remember which port it is on, teach all new employees about this security measure, and eventually it'll end up as a sticky note attached to users' displays or in automated scripts with the port hardcoded, nullifying its security value.

Similarly, you can obfuscate source code to protect it. But if anyone gains access to it, it's only a matter of time before they restore the original meanings of functions and variables (for example, by careful debugging), making the obfuscation pointless. On the other hand, you have to remember to run a special program before compiling the code or (what's even worse, but I have seen it) work directly on code with a weird naming convention.

In my opinion, you lose more than you gain by using obscurity, and that's why security by obscurity is considered bad practice.

Spook