18

[Edit] I completed the analysis and framework of concepts that were included in my thesis as extensions to existing frameworks. All of the information in this thread was useful. For those interested, a direct link to an extracted and shortened version of the document is below. I'm always open to critical review and impressions, though this site may not be the place for continuous debate.

http://www.levii.com/images/documents/secure%20development%20environments.docx

I'm working on a whitepaper [full disclosure: this is for my Master's thesis] that discusses security within the realm of secure software development. While secure software engineering best practices and standards are well published and documented, the shortfall seems to be that most of these deal exclusively with the domain of the software creation itself and either completely miss, or gloss over, the environment in which it is created.

I'm aware of the DISA Enclave STIG, Appendix A, and the concept of security zones that are separated for the purposes of software development, testing and production, and I have personal experience creating such environments ... what seems to be missing, though, are audit frameworks (similar to ISO 27001 or NIST 800-53) and best-practice guides published in the community.

There is this stackexchange question: Data loss protection in software artifacts, and I've also run across a couple of SANS.org whitepapers that very briefly discuss the subject. They do seem, though, to miss the fundamental question I have regarding industry-wide best practices, frameworks or processes.

So the question(s) I have for the IT Security community here are:

1.) What references are you aware of that discuss these types of separation?
2.) Of course there are policies that must be in place, technical solutions in the network architecture, etc. - what do you base these on (or do you do as I have done ... base it off of personal experience and anecdotal knowledge)?
3.) Of the "big 3" ISMS standards (ISO27000, NIST800, FIPS140), do you think that any one is best extended into such an environment? If none, is there a different set that I should look into?

Of course, I'm absolutely open to anything else the community may have to say on the subject. I've seen some commercial offerings ... they are, of course, reluctant to give out a whole lot of information on their tools, techniques and processes.

iivel
  • Secure software for what? In what type of environment: financial, medical, government, military, corporate R&D? How long do you desire the environment to last: end of the project, end of the product line, end of the company? – this.josh Aug 27 '11 at 03:49
  • My question is less about secure software, and more about creating a secure environment for the production of that software (prevention of exfiltration of source and/or data, protection of production assets from in-development systems). Rather than create a specific environment, I'm looking for a reference framework (or set of frameworks) for guidance (much like NIST 800-53 lays out some fairly specific governance and security controls for the protection of government IT systems but is used fairly extensively in the commercial world as a guide). – iivel Aug 28 '11 at 16:06
  • Your goal is exceptionally difficult. Given the way software is often produced, control over source and production artifacts is hard to achieve. Software tends to be written by teams made up of geographically and philosophically diverse people. Technologies have developed which make sharing and collaborating easier, and in turn made distribution control and containment more difficult. I am skeptical that most companies will pay the price for the security they desire. For example, preventing a programmer from working from home increases security and decreases productivity. – this.josh Aug 29 '11 at 06:57
  • Thanks for the input this.josh. I agree that this is a difficult goal, but there are technologies that can assist as well. A remote developer could be on a laptop secured w/ checkpoint (or something like it to ensure no config changes) that tunnels into the dev system via VPN and actually develops on a remote desktop session (just 1 idea). Most companies certainly wouldn't pay for the additional overhead, but I have been on DoD projects where it was a contract requirement - so I know that the need (at least to some degree) exists for a common governance and design mechanism. – iivel Aug 29 '11 at 13:17
  • I agree. Certain organizations like the US DoD will pay for security they want. That is why I asked about the type of environment. You can make a lot of progress researching military and governmental organizations, but I am not so sure about other types. I have also seen companies certify to standards like CMMI without gaining benefit. The companies meet the certification standards but their processes are still broken: they frequently use exceptions where the standards become inconvenient. "The nice thing about standards is that you have so many to choose from." Tanenbaum – this.josh Aug 29 '11 at 17:22

2 Answers

9

This is one of the elements in OpenSAMM, the Open Software Assurance Maturity Model, as from a governance perspective it is essential to have appropriate separation of development, test and production environments.

Without this separation, there are a number of key risks, including:

  • a change to code that has not been tested could very easily delete or corrupt data in a production environment, or even break the system sufficiently to deny service
  • a developer with access to a production or test environment could subvert system controls in order to commit fraud, disrupt systems or gain unauthorised access

Companies in regulated industries (e.g. Financial Services) tend to have this separation enforced under audit rules, but it is recommended as good practice in most industries.

In the US, CMMI-DEV, listed on this page, is also appropriate, and most SDLC programmes do mandate or at least recommend secure software development environments.

There are also elements in ISO27001:2005 which do apply, but I think they aren't formalised enough yet.

Rory Alsop
  • I was not aware of the OpenSAMM project, thank you for the link - it appears to be largely what I'm looking for from a governance model. I see that there is "future" work to map the model to ISO 27002 (and likely others). I will certainly be on the mailing list and perhaps will be able to contribute in the future. – iivel Aug 26 '11 at 19:54
  • Rory, thanks for the answer. This might be a more helpful answer if it explained why it is essential to have separation of these environments, and what "appropriate" means. – D.W. Aug 27 '11 at 04:28
  • I think this answer overrates the benefits of isolation. Even if you do isolate production from development environments, it is still likely to be true that a developer can subvert system controls in order to commit fraud, disrupt systems, or gain unauthorized access -- simply by placing a backdoor in the code. After all, that code is likely to be run in the production environment. There's nothing wrong with isolation; and it is good to separate development, test, and production environments; but I don't think that this is enough to protect you from a malicious developer. – D.W. Aug 28 '11 at 23:01
  • Oh, I wasn't wanting to imply this was sufficient. I agree that you also want review of code by a buddy, and the use of a security review tool like Fortify or other tools @AviD likes:-) And pen testing post implementation. And...... – Rory Alsop Aug 29 '11 at 08:16
  • @Rory & DW. Thank you for the follow ups - and I agree with both of you in that isolation is one of a myriad of protections that needs to be in place. My shortcoming is in finding reference guidance and frameworks for this type of isolation. I'd assumed that I'd need to build much of the framework myself and the issue is finding whitepapers or industry research to support it. – iivel Aug 29 '11 at 13:19
  • You might want to also look at the Building Security In Maturity Model - http://www.bsimm.com/ – boos Jan 12 '15 at 12:55
6

As a disclaimer, I am also an info sec grad student and have done my fair share of research around this topic. The majority of my research indicates that the problem is not a lack of frameworks; it is a fundamental misunderstanding of the concepts of security. What happens, at least on every project I have been on (private and public sector), is that a non-technical person is assigned to be the technical lead, or a non-security person is assigned to be the official auditor of a system. In these scenarios, development cycles seem to be "delayed" because of increased security controls, for instance Tripwire, HIPS (McAfee's special brand), the anti-virus system, and the automatic backups that occur on encrypted drives. Couple this with the borderline useless STIGs and other technical controls that other teams think are ensuring protection.

As a quick sidebar, the STIGs are useless because:
1) You can write a POAM and completely ignore the STIG
2) A completely STIG'd box can be broken just as fast as a non-STIG'd box

So basically what happens in this scenario is that the development/test teams complain to leadership that security is impacting their performance as well as the performance of the application. Since it is "easy" to snow someone without a technical background on these issues, oftentimes a member or two of the team is assigned to write POAMs to get the security controls removed. That being said, if you want to have a framework in place, it must come at the expense of time and at the expense of "frustration" on the development team.

In terms of securing the environment, the most important thing to realize is that everyone on the team could be a threat, whether intentional or unintentional. So we first need to minimize the level of damage that these types of individuals can do. This is where the concept of job rotation comes into play (every 60-90 days, for instance). By forcing the team to switch positions you theoretically increase security, as new eyes are brought into the environment, and it additionally forces username/passphrase (notice passphrase, not password) combinations to be updated. Doing this also increases the robustness of the team, as people will not get bored as easily and can therefore continue to expand their horizons. Please remember I am not saying make a DBA the new physical security guard at the nuclear plant; keep them in technical positions or in whatever field they are specifically trained.
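As a rough sketch of that rotation cadence, the snippet below generates a schedule; the roles, names, and 75-day interval are made-up examples for illustration, not drawn from any standard.

    # Sketch: generate a job-rotation schedule for a development team.
    # Roles, names, and the 75-day interval are hypothetical examples.
    from collections import deque
    from datetime import date, timedelta

    roles = ["build engineer", "DBA", "app developer", "test lead"]
    team = deque(["alice", "bob", "carol", "dave"])
    interval = timedelta(days=75)  # pick any cadence in the 60-90 day window

    start = date(2011, 9, 1)
    for cycle in range(4):
        begins = start + cycle * interval
        print(f"Rotation starting {begins}:")
        for role, person in zip(roles, team):
            print(f"  {role}: {person}")
        team.rotate(1)  # everyone shifts one role; credentials are reset at each shift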

In terms of protecting the system that the team will be working on (the individual workstation), it is the standard anti-virus + anti-malware + HIPS/HIDS (the latter only if the system is truly critical). As a step of added caution, remove all optical drives and solder shut all USB ports; if this is not practical, install software, managed from an outside system, that reformats anything inserted into a USB port, and still disable the optical drives.
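A minimal sketch of that kind of removable-media watchdog is below, assuming the third-party psutil library; the actual reformat/quarantine action is site-specific, so it is left as a placeholder alert.

    # Sketch: poll for newly mounted volumes and flag them for quarantine.
    # Assumes the third-party psutil library; the response is a placeholder.
    import time
    import psutil

    def mounted():
        return {p.mountpoint for p in psutil.disk_partitions(all=False)}

    baseline = mounted()
    while True:
        current = mounted()
        for new in current - baseline:
            # A real deployment, driven from a centrally managed system,
            # would reformat or block the volume here.
            print(f"ALERT: new volume mounted at {new} -- quarantine pending")
        baseline = current
        time.sleep(5)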

In terms of the network where the team will be working: have an established whitelist of IP addresses that developers can hit; this prevents them from setting up FTP servers via a small jar and transporting files to their home systems. It is also of great importance to disable access to sites that are proxies, as developers have no business trying to hide what they are doing in the environment anyway. Ensure that your configuration files are not sitting on public web servers, and also ensure that developers do not have access to those systems.
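To illustrate the whitelist idea, the sketch below emits iptables egress rules from an approved-destination list; the addresses are hypothetical examples, and a real environment would enforce this at the boundary firewall rather than per host.

    # Sketch: emit iptables egress rules allowing only whitelisted destinations.
    # The IP addresses are hypothetical; adapt to the site's boundary firewall.
    ALLOWED = [
        "10.1.2.10",  # internal source-control server (example)
        "10.1.2.20",  # internal build/CI server (example)
    ]

    rules = [f"iptables -A OUTPUT -d {ip} -j ACCEPT" for ip in ALLOWED]
    rules.append("iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT")
    rules.append("iptables -A OUTPUT -j DROP")  # default-deny everything else

    print("\n".join(rules))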

At the end of the day keep all systems up to date and use diverse vendors to prevent one system compromise from impacting every other one.

WHAT A STIG IS
Since a STIG may be an esoteric piece of security literature, I will explain what one is. An example of where these can be found: Windows 7 STIG
This is a step-by-step checklist of common and known attack vectors and/or weaknesses found in the default settings of a specific type of application. By following this checklist, security is potentially increased; the counter-measure is to write a POAM which nullifies whichever specific check you need waived for operation. They exist for a myriad of application types, some specific, such as Oracle, and some generic, such as databases in general.

Response to why the Enclave STIG does not necessarily equal a framework
I have read over the Enclave STIG, and while on the surface it does appear to be a guide of sorts, it still forces you to abide by a series of STIGs that are nothing more than checklists. Also, when a review comes up, you fail everything that the auditing framework does not account for, an example being BSD. Since there is no GoldDisk (their scanning tool) for BSD, all of your systems that run it are considered completely vulnerable to everything (that an OS could be vulnerable to). Now, I will agree that it does provide some very specific architecture and a series of ports that you must disable; I would not, however, call this a framework. It is more of a failed attempt at a security design pattern than anything. I say this because it is effectively impossible to implement all STIGs for your system, maintain confidentiality, integrity and availability (CIA), and have the system function as it was intended.

Woot4Moo
  • The security-related definition of STIG I know-Security Technical Implementation Guideline-doesn't make much sense in the context of your answer. Could you explain what you mean by STIG? –  Aug 26 '11 at 15:05
  • The DISA STIG, which are a set of checks to secure a system. When you follow the STIG, in my experience it is referred to as stigging a box. To clarify on my answer, a way of getting around implementing certain checks of the STIG is to write a POAM (plan of action and milestones). Also, completely following the STIG to the letter does not prevent the system from being compromised. – Woot4Moo Aug 26 '11 at 15:26
  • So how does one "STIG a box"? Store a guideline on it as a Word document? –  Aug 26 '11 at 15:39
  • @Graham that would be far more effective than trying to implement them =p. So if you open a STIG there is a list of items that need to be completed; for example, the WebServer STIG states something along the lines of "the web server cannot run as a privileged user." So to get that checked off, you need to ensure that your web server is not running as a privileged user. – Woot4Moo Aug 26 '11 at 15:43
  • @Woot4Moo. While I understand your point about a bunch of STIGs just being "hardening checklists" there are also a number of them that actually act as the "guides" their names entail. This is the STIG I was referring to: http://iase.disa.mil/stigs/net_perimeter/enclave_dmzs/enclave.html Since this describes an architecture and a method (without telling you specifically how to do it) I would think that it falls under the umbrella of a framework or guide. – iivel Aug 26 '11 at 19:49
  • I will download said zip at home and review, updating my answer as appropriate. – Woot4Moo Aug 26 '11 at 20:08
  • @iivel I have updated my response – Woot4Moo Aug 26 '11 at 20:47
  • @Woot4Moo. Thanks for your updated response. You've certainly outlined the failings of the current guides/frameworks in developing a governance structure for an ITMS/ISMS and provided some essential and practical security controls. Are there any other references that you'd suggest for this type of boundary separation? – iivel Aug 28 '11 at 16:12
  • @iivel let me dig through some of my previous papers and get back to you on this – Woot4Moo Aug 28 '11 at 16:19
  • The G in STIG stands for Guide. STIGs were originally intended as a starting point. However, since the accreditors tend to have severe tunnel vision, and hold people's feet to the fire over complete, blind compliance to the STIGs, they've become a de facto 'Law.' STIGing a box should be a mere beginning, a nice starting point to building a more secure system, not the official stopping point, with the ability of being penalized for actually improving upon it. – Marcin Oct 22 '12 at 13:29