23

First of all, I'm sorry if this has been discussed many times. I read many posts about PCI compliance but there are some small things I'm not quite sure about.

Suppose there is Mr. GoodGuy, an honest software developer. He develops the main software architecture, and the company trusts him and gives him all the access he reasonably needs. This software stores credit card numbers for recurring payment management, and it uses a credit card gateway to charge the renewal amount.

Mr. GoodGuy could write some code that would decrypt the card data for a user. No matter what level of security the software has (an encryption key in a secured server location, per-user keys, or anything else), the software itself can somehow decrypt the card data. That means that even though the developer is honest, he could access card data.

  • What are the possible solutions that other companies have implemented that prevent someone from using the software to access card details?

This is not really only about card details. It could be anything: online file storage services, medical data, and so on. How can a developer make sure he won't be able to access the data whenever he wants, while still making it possible for the software to access it (without user participation)?

PS: I'm Mr. GoodGuy here and I have no intention of doing anything bad. I'm wondering how other companies deal with this. Do they simply trust the developers? Even if a developer resigns, he could take the key file with him. Flushing all stored cards is not an option here either, since it would disrupt many existing recurring sales.

AKS
  • 714
  • 5
  • 13
  • 1
    Why do you assume he has access to the key file/master password/etc.? – Gene Gotimer Oct 30 '14 at 18:53
  • 1
    Because the software somehow has to decrypt the encrypted string back to charge the renewal fee. It's just the code, so he can simply run it whenever he wants (assume, for the sake of this question, code reviewers missed this and code was deployed to live). – AKS Oct 30 '14 at 18:55
  • I still don't see why you would give him access to the key. BTW, that is one answer. Don't let a developer have access to the key. Ever. – Gene Gotimer Oct 30 '14 at 19:13
  • 4
    I know the encryption keys must be kept away from everyone. But the developer has an advantage: he can write some code that runs in the live environment, and the code itself gives him the card data. With version control and such, it's possible to track him back, but by the time others realize it, the damage would be done. – AKS Oct 30 '14 at 19:17
  • 15
    Honor. All the technological controls my company has loaded would be to me as straw should I want to steal the customers' data, but I will not. – Joshua Oct 30 '14 at 23:08
  • 8
    I feel the need to warn you that if this *isn't* a hypothetical question and you plan to store credit card details yourself, you're in for a massive world of hurt. PCI DSS is a *nightmare* even for well-resourced, well-prepared teams of experts. I very strongly recommend not attempting it, and instead using a third-party solution where the card details don't pass through your systems. – Iain Galloway Oct 31 '14 at 13:35
  • 1
    If the developer steals this data, and acts on it, there will be an investigation from the authorities. The authorities will immediately suspect the developer. It will be hard for the developer to conceal the crime when actively investigated, and the mere suspicion can damage their career. So the benefit from stealing the data is too small to justify the risk (and losing the income of being paid however much for doing their job honestly). – Superbest Oct 31 '14 at 18:36
  • 2
    A short non-answer is that if your company cannot afford multiple separate people for development, review of each change, production, and auditing, then PCI DSS says that your company isn't allowed to store credit card data. A company with a single developer can't qualify for storing CC data no matter what else they do; they can only outsource it. – Peteris Nov 01 '14 at 09:57
  • 4
    I would go with the old story: `Locks are on doors only to keep honest people honest. One percent of people will always be honest and never steal. Another 1% will always be dishonest and always try to pick your lock and steal your television; locks won't do much to protect you from the hardened thieves, who can get into your house if they really want to. The purpose of locks, the locksmith said, is to protect you from the 98% of mostly honest people who might be tempted to try your door if it had no lock.` – Francisco Presencia Nov 01 '14 at 20:49

7 Answers

31

PCI DSS sections 6, 7, and 8 all bear on this question.

For example, part of 6.3.2 which requires code review:

Code changes are reviewed by individuals other than the originating code author, and by individuals knowledgeable about code-review techniques and secure coding practices.

6.4 with change control:

A separation of duties between personnel assigned to the development/test environments and those assigned to the production environment.

7.1 controlling access... in many environments the developer who writes code never accesses the operational systems where it's used with live data:

Limit access to system components and cardholder data to only those individuals whose job requires such access.

And a touch of 8.7 to put restraints on those people with access:

Examine database and application configuration settings to verify that all user access to, user queries of, and user actions on (for example, move, copy, delete), the database are through programmatic methods only (for example, through stored procedures).
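The "programmatic methods only" requirement can be illustrated with a minimal sketch: the application exposes a narrow accessor that never returns raw cardholder rows, so nothing outside of it can run ad-hoc queries against the card table. (The table, column, and class names here are invented for illustration, not taken from the DSS.)

```python
import sqlite3

# Sketch of the 8.7 idea: all reads of cardholder data go through a narrow
# data-access object, and the only card-related read it offers is a masked
# value, never the full PAN.

class BillingDAO:
    def __init__(self, conn):
        self._conn = conn

    def last_four(self, user_id):
        # substr(pan, -4) returns only the last four digits.
        row = self._conn.execute(
            "SELECT substr(pan, -4) FROM cards WHERE user_id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (user_id INTEGER, pan TEXT)")
conn.execute("INSERT INTO cards VALUES (1, '4111111111111111')")
dao = BillingDAO(conn)
print(dao.last_four(1))  # 1111
```

In a real deployment the same effect is usually achieved with stored procedures and database grants, so that even a developer with application credentials cannot select the raw column.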

Now, all that said, can a trusted insider ever be perfectly defended against? No, because of the very definition of "trusted". This is true everywhere (how many spies have been "trusted"? John Anthony Walker comes to mind.) But there are best practices for defending against such a threat and for mitigating it, and the PCI DSS formalizes a number of these practices as requirements (for credit cards... other secrets are on their own!)

(And as @Stephen-Touset points out, 3.5.2 requires:

Store secret and private keys used to encrypt/decrypt cardholder data in one (or more) of the following forms at all times:

And one of those ways is:

Within a secure cryptographic device (such as a host security module (HSM) or PTS-approved point-of-interaction device)

Which has the advantage of escrowing the actual key material away from day-to-day users and administrators.)

gowenfawr
  • 71,975
  • 17
  • 161
  • 198
  • Don't forget hardware security modules, which attempt to ensure that nobody, even with physical access to the hardware, can ever learn the secret keys. – Stephen Touset Oct 30 '14 at 19:07
  • Thanks for the answer. In my case, I'm the developer here and I was wondering if there is something I could do to lock myself out (in case my laptop gets stolen or someone steals my private keys). We are a small startup team, so the production servers are also managed by me. Thanks again for your detailed answer. – AKS Oct 30 '14 at 19:23
  • 9
    It's very difficult to do on a shoestring operation, because the key ingredient is separation - separation of duties, separation of access, separation of oversight. And a startup doesn't have the bodies to spare for separation. If your concern is other people impersonating you, consider enhanced authentication - 2-factor with time-based tokens, for example, they're unlikely to steal your laptop and your fob/phone and your PIN. – gowenfawr Oct 30 '14 at 19:30
  • 1
    I've seen this sort of thing followed exactly zero times in projects that were ultimately successful. Its hard, anyway, since most security standards are contradictory, either incompatible with current practices or other standards, or actually self-contradictory (of course, this is true for nearly any non-trivial standard, not just security ones). – zxq9 Oct 31 '14 at 05:43
  • 1
    @AyeshK - Depending on the amounts involved, and how much it would cost to implement/for compliance/in the case of breaches, you may have better luck using one of the existing payment processors (like PayPal or Amazon's). They manage all this stuff for you, for a fee. Dealing with stuff of this nature is non-trivial, and people are becoming more wary (given the number of breaches occurring). – Clockwork-Muse Oct 31 '14 at 06:40
  • So, attempting to read between the lines here, is it actually impossible for a single-person business to be compliant with PCI-DSS? – Periata Breatta Oct 31 '14 at 12:22
  • @PeriataBreatta, no, but for a business that develops their own software for card processing purposes, it might be. That's a very small minority of DSS businesses! Bear in mind that the PCI DSS scales from the most minor merchant to the largest payment processor - you may want to see how the various SAQs fine-tune its impact on differently organized businesses. – gowenfawr Oct 31 '14 at 12:39
  • 2
    @PeriataBreatta single-person businesses are compliant with PCI-DSS by keeping away credit card data and ensuring that any storage of CC data is handled by other businesses who are capable of doing it properly. E.g., you redirect the user to a payment gateway and get a token from the gateway confirming the transaction and possibly allowing you recurring payments - all without ever seeing a single CC number. – Peteris Nov 01 '14 at 09:49
19

To a not-insignificant degree, this is (as you mentioned) a trust issue, not a technical one. We try, as far as we can, to hire trustworthy people who won't abuse their positions.

That said, there are a number of controls that can be implemented to limit unauthorized access and/or to verify that the trust placed in individuals is well-founded and not, in fact, being abused.

Here are some of those controls:

  • Secrets should be kept secret. Keys should not be built into software. They should be generated and managed by those who administer and/or use an application instance, not by the developer of the application. This means that the keys used in a dev environment are going to be different from those in the QA environment, and most certainly different from those used in prod, and there's rarely a reason for a developer to have access to a production environment, much less access to the keys there.
  • Separation of duties. This carries on from the end of the last point. Developers develop applications, network engineers manage network traffic and devices, server engineers administer servers, database administrators watch over the data, and so on. In most cases, it would be unreasonable for a developer to have access to production servers and databases housing real, sensitive data like credit card information.
  • Verification of work. In this case, we're talking about code review, primarily. Again, in most cases, there's no reason a developer should be able to push code that does who-knows-what into production without somebody else taking a look at it. While this is primarily designed to catch unintentional mistakes and to ensure that best practices and conventions are followed, it has the helpful side effect that most intentionally malicious additions should be noticed and red flags raised.

There are countless other controls that could be potentially listed, but these are some of the primary categories that most of them will fall into.

Xander
  • 35,525
  • 27
  • 113
  • 141
  • Happy rep birthday Xander! What's it like having moderator tools? – paj28 Oct 30 '14 at 19:25
  • 1
    Thanks for your answer. It seems that, despite all the encryption we have, there is going to have to be trust in people as well. In a company with enough manpower, this would definitely be possible. In a 2-3 person company, I think trusting each other is the best bet. Thanks again for the answer. I marked the earlier one as "the answer", but this is equally good and helpful to me. – AKS Oct 30 '14 at 19:25
  • 1
    Most businesses don't even have enough employees to assign a different human to each of the roles you state should "in most cases" be kept separate. You seem to assume we're talking about corporates here, but most businesses are not corporates. Restricting to the US, since they're easy to find lots of census data for, three quarters of businesses have no employees at all (they're run by self-employed folks) and over half of the remainder have fewer than 5. The idea that developers should not have access to production doesn't really work when there's nobody else to take on that role. – Mark Amery Oct 30 '14 at 20:53
  • @paj28 Thanks! Much the same, with some additional interesting data. – Xander Oct 30 '14 at 23:36
  • @MarkAmery To be pedantic, most business (and particularly small ones) don't develop their own applications at all. It's when you move into larger organizations where that becomes prevalent. So, yes, the numbers are debatable and the correctness of the term "most" is relative, but separation of duties is always a relevant class of controls, and should always be considered to determine if and where it is an appropriate option. – Xander Oct 30 '14 at 23:40
  • @MarkAmery exactly, so what you just said means that PCI DSS prohibits most companies from storing CC data unless they make significant changes at significant extra expense, or delegate CC handling to others. – Peteris Nov 01 '14 at 09:51
8

The cost of preventing this is enormous, so it is rarely done outside of huge, well-funded development groups. The mentions above of code review, security review, etc. are all good ideas, but in practice customers are more interested in getting functioning code than in delaying use of their assets for months while review processes happen.

The majority case my company deals with is medium-sized businesses that are willing to spend resources getting custom software written for in-house use, but not splurging on some glacially-paced ISO conformant development committee just so their customer contact tracking system or project management database can be improved.

Practically speaking there is almost no way to prevent this sort of abuse other than to deal only with software vendors you trust. This isn't a solution, of course, but it at least sets the customer's mind right and may guide them to pick business partners carefully -- and a software vendor is a business partner, one of the most intimate any company will have, though people seem amazingly blind to this most of the time.

Consider the scandals that came out over the last few years with Google, Apple, Microsoft, etc. and NSA involvement. Or even Google's self-directed privacy invasions. The developers were making sure someone could steal their customers' data, and in a way that the security review processes -- which these particular organizations are large enough to afford -- did not catch. It's really a "Quis custodiet ipsos custodes?" problem (lit. "Who guards the guards?").

In my own case we have determined that we will never hold customer data ourselves. That means we stand up cheap little servers local to customer sites, and those serve the business directly. This is the era of insanely fast internet and cheap hardware; a small business doesn't need a cloud service to access their data from anywhere in the world. To ensure safety and data redundancy we provide over-the-wire backup, but it's all encrypted blobs, so we can't read it.

We could certainly open holes in their servers and abuse their trust if we wanted. But there is no way to stop someone evil from doing that. As the owner of my company I've decided that the best balance of security VS usability (for us and the client) is to have them hold their data, and us only keep encrypted backups of it.

I mentioned the "cloud" above. That's probably the single largest threat to data security ever imagined by anyone so far, and there are exactly zero ways to guarantee protection of customer data once it is out of their hands. "Possession is 90% of the law" is a good lesson, because in the modern era it's 90% of data security.

zxq9
  • 340
  • 2
  • 8
  • I agree that realistically most companies just don't do much and just trust their IT people but I fail to see the point in your advice. Choose software vendors carefully based on what? The fact that their name sounds familiar and they haven't had a widely publicized breach yet is not much of a guarantee of anything. As you said, even in your setup, you could still pretty much do anything. – Relaxed Oct 31 '14 at 07:27
  • 3
    @Relaxed It's more about understanding the nature of the threat than countering it directly. For example, segregating data internally so that a breach in any one area is less of an issue, or having different vendors implement accounting and customer recordkeeping systems. If a customer isn't even aware of the threat they have no hope of assessing the risk. Businesses manage risk, it is core to the function of the free market. If they aren't aware of this one there is nothing they can do but stick their heads in the sand. – zxq9 Oct 31 '14 at 07:35
7

For small-to-medium enterprises where one developer wears multiple hats (DBA, sysadmin, tech support, webmaster, etc.), the task of satisfying PCI DSS requirements would be too onerous. One possible solution to prevent a developer from obtaining sensitive data is to use a third-party API, where processing and storage of sensitive data happen on a trusted third-party website instead of your own.

In the case of credit card transactions and recurring payment management, you can use PayPal, which is PCI DSS compliant, instead of rolling your own system. Of course, the code still needs to be vetted to ensure that customers are indeed redirected to the third-party website during the transaction.
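The pattern described above is usually called tokenization: the card number goes to the gateway once, and only an opaque token comes back for recurring charges. A minimal sketch, with `GatewayStub` standing in for a real processor's API (all names here are invented):

```python
import secrets

# Sketch of the token-instead-of-PAN pattern: the merchant's database stores
# only the token, and the PAN lives solely in the gateway's vault.

class GatewayStub:
    def __init__(self):
        self._vault = {}

    def tokenize(self, pan):
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = pan      # stored on the gateway's side only
        return token

    def charge(self, token, amount_cents):
        return token in self._vault   # merchant never sees the PAN again

gateway = GatewayStub()
token = gateway.tokenize("4111111111111111")
# The merchant database stores only `token`, never the card number.
assert gateway.charge(token, 999)
```

With this split, even a developer with full read access to the merchant's own database holds nothing a thief could use.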

At the end of the day, you have to start trusting someone (a trusted developer or a trusted third party) who is hired to do the job for you. Otherwise, you have to do everything yourself.

Question Overflow
  • 5,220
  • 6
  • 27
  • 48
3

Often the developer won't have complete access to the customer database.

In my company, all our development is done on anonymised databases: credit card numbers, personal details, etc. are removed and things are jumbled up. The live databases are on the customer machines, and junior/mid-level developers simply don't have read access on those tables.
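The anonymisation step described above can be sketched simply: before a copy of production data reaches a development database, PANs are masked down to the last four digits and rows are shuffled to break correlations. (The field names are invented for illustration.)

```python
import random

# Sketch of dev-database anonymisation: mask card numbers, jumble rows.

def mask_pan(pan):
    return "*" * (len(pan) - 4) + pan[-4:]

def anonymise(rows, rng=random):
    rows = [{**r, "pan": mask_pan(r["pan"])} for r in rows]
    rng.shuffle(rows)   # "jumbled up": break row/identity correlations
    return rows

prod = [{"name": "Alice", "pan": "4111111111111111"}]
print(anonymise(prod)[0]["pan"])  # ************1111
```

A real pipeline would also scramble names, addresses, and any other identifying fields, but the principle is the same: developers never need real values to develop against.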

We could access them using the system passwords, but doing so would be logged twice: once when retrieving the file to extract the password, and again when logging in to the database from the 'wrong' machine.

Other systems I've seen include encrypting the credit card details and the key being unavailable to the developer.

At the end of the day a sufficiently determined developer could access almost anything, but by making it hard to do you avoid the casual temptations and by logging you make it clear that there will be repercussions.

Jon Story
  • 674
  • 6
  • 8
  • A quick note on the point you made about logging: If you can log that somebody did something, then you can stop them from doing it (and still log that they tried to). I claim that you should do this, so the only security holes are the ones you cannot log. Unless you're talking about catch-all logging, like keyloggers on the machine, outgoing packet logs, and such. Keyloggers are client-side, and therefore theoretically bypassable, but logging packets of course would happen elsewhere. – Cruncher Oct 31 '14 at 15:29
  • That's situational - my example was based on the fact that even if developers can't read credit card tables in a database, by definition the production system has to. By using the production password the developer can gain access which cannot necessarily be blocked (without blocking the live system), but by logging it, can be traced back later. It all depends on the specific setup though, and certainly some actions can be logged or trigger alerts, while blocking the attempt. – Jon Story Oct 31 '14 at 15:40
2

I'm surprised nobody has mentioned DUKPT, but maybe that's because some of the major payment gateways never got around to supporting it, and perhaps they never will now that TripleDES is subject to brute force attacks. But it was a great idea in its time, and there's no reason something like it couldn't be done with modern encryption. Some vendors are still selling card readers with DUKPT or something like it, and there are some small processors that support the encryption and act as proxies for the larger ones.

I can't add anything to the Wiki article, and I don't claim to fully understand how it works, just what it does. But essentially, the hardware is tamper-resistant and has built-in encryption, and it emits the PAN either encrypted or redacted. Only the payment gateway can decrypt it, so the merchant or the developer of their software cannot compromise it through malice or negligence.
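To make the "unique key per transaction" idea concrete, here is a sketch of the principle only. This is NOT the actual ANSI X9.24 DUKPT algorithm; it simply derives a fresh key per transaction from a base key and a counter via HMAC, so a single captured key decrypts at most one transaction.

```python
import hashlib
import hmac

# DUKPT-like sketch (not the real X9.24 derivation): the tamper-resistant
# reader holds a base key injected at manufacture and derives a unique key
# from its transaction counter for every swipe.

def per_transaction_key(base_key: bytes, counter: int) -> bytes:
    return hmac.new(base_key, counter.to_bytes(8, "big"), hashlib.sha256).digest()

base = b"injected-at-manufacture"   # never leaves the device
k1 = per_transaction_key(base, 1)
k2 = per_transaction_key(base, 2)
assert k1 != k2   # every transaction encrypts under a different key
```

The real scheme is cleverer still: the device stores only future keys, so compromising it doesn't even expose past transactions.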

  • Your second point is the important one -- encrypt sensitive data in tamper-resistant and controlled-function hardware, so in transport and breachable systems and (here) storage the data is protected. DUKPT is a clever algorithm well-suited to this use and thus commonly used, but you can use other good-quality encryption (if available) in a secure device and get the benefit and you can use DUKPT in insecure Windows software (e.g. for compatibility) and not get the benefit. – dave_thompson_085 Nov 01 '14 at 14:19
1

In addition to all the answers that explain how to prevent developers from accessing such secret data, there is also a major indirect method: access logs.

Queries can be logged, as can any shell commands, and these logs should be saved in a way that makes them impossible for an individual developer to delete. That way, even if they do have access, red flags can be raised (why do they want ALL the credit card numbers? In production?). The important part of this is that the developer uses their own credentials for the work, and that there aren't any "shared accounts" that don't lead back to specific people who can be held responsible for their actions.
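One way to make a log "impossible for an individual developer to delete" without special infrastructure is a hash chain, sketched below: each entry commits to the previous one, so removing or editing a line breaks the chain and is detectable on audit. (A production setup would additionally ship entries to a write-only remote store.)

```python
import hashlib
import json

# Tamper-evident log sketch: each entry's hash covers its content plus the
# previous entry's hash.

def append(log, user, action):
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"user": user, "action": action, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("user", "action", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

log = []
append(log, "dev1", "SELECT * FROM cards")   # suspicious, but recorded
assert verify(log)
log[0]["action"] = "harmless query"          # tampering...
assert not verify(log)                       # ...is detected
```

Combined with per-person credentials, this turns "could anyone have done this?" into "this specific account did this at this time".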

user2813274
  • 2,051
  • 2
  • 13
  • 18