31

Background

I'm in charge of auditing a medium-scale web application. I have audited web applications several times before, but I've always quickly written a short PDF explaining what I encountered, and since I'm usually the one who's going to fix those vulnerabilities, I never cared about the actual content of the report.

In my current job, things are done in a more organized fashion: first I have to write the report, then the project manager will review it, and then he'll decide whether I or someone else will fix the issues.

Question

What should such a report contain? I'm looking for a general outline of how it should be organized.


Update: Because I couldn't find anything here on Security.SE about audit reports, I decided to make this question a bit broader and include any kind of security audit rather than just web applications. I think it'll be useful to more people in this case.

Adi
  • I'd like to add that GIAC has what appears to be a [security audit report on their systems](http://www.giac.org/paper/gcux/67/security-audit-report/101128) ([mirror](https://archive.org/download/security-audit-report_67_/security-audit-report_67_.pdf)). It is a very detailed report with good references and appendices. It's nearly 70 pages but it's definitely worth the read – Adi Oct 01 '14 at 12:29

9 Answers

26

There are a couple of ways I've seen this done; each has its pros and cons.

As noted by @RoryAlsop below, a common point for both approaches is that the executive summary should, as much as possible, be written for a business audience (assuming that it's a test you're doing for a third party, or that the report will be passed to management).

  • Reporting by finding. Here you list the findings, usually ranked by severity (e.g. CVSS score or some other scale like severity/likelihood). You then list the technical details of the finding and potential mitigations, if you have that information. This kind of report gets to the point quickly and plays well with tool output.
  • Reporting by methodology. Assuming that you're following a defined testing methodology, the report is structured along the lines of that methodology and includes a section for each area of the review. The sections detail what testing was done and the outcome (either a finding, or the fact that there was no finding in that area). The advantage here is that you're showing your workings, so someone reading the report can see that you actually tested something and it was OK, rather than you just having missed it out. The downside is that it tends to produce a longer report and is harder to automate. One other gotcha is that you need to make sure the testers don't just follow the methodology; they should actually engage their brains and look for other things.

In terms of the format for the findings, I usually include the following (a minimal sketch of such a finding record follows the list):

  • Title (descriptive; it gets used in the summary table and linked to the detail)
  • Description - a technical description of what the issue is and, importantly, under what circumstances it is likely to cause a security problem (e.g. for Cross-Site Scripting, one of the potential issues is its use to grab session tokens, which could allow an attacker to gain unauthorised access to the application)
  • Recommendations - how the issue should be resolved; where possible, include specific vendor guidance for fixing it (e.g. things like removing web server versions from headers have specific instructions for Apache, IIS, etc.)
  • References - any links to additional information that's relevant to the finding (e.g. links to the relevant OWASP page for web app issues)
  • Severity - as I mentioned above, this could be CVSS or something more general like High/Medium/Low based on impact and likelihood.
  • Other classification as needed by the client. For instance, some clients might need findings mapped against a standard or policy, or something like the OWASP Top 10.
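To make that concrete, here is a minimal sketch of such a finding record in Python. The field names simply mirror the list above, and every value in the example is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """A single finding, with the fields described above."""
    title: str                  # descriptive; used in the summary table
    description: str            # what the issue is and when it causes harm
    recommendations: str        # how to resolve it, with vendor specifics
    references: list[str] = field(default_factory=list)   # e.g. OWASP links
    severity: str = "Medium"    # CVSS score or High/Medium/Low
    classifications: list[str] = field(default_factory=list)  # client mappings

# Invented example entry:
xss = Finding(
    title="Reflected cross-site scripting in search page",
    description="Input in the 'q' parameter is echoed unescaped, which "
                "could be used to grab session tokens.",
    recommendations="Contextually output-encode all user-supplied data.",
    references=["https://owasp.org/www-community/attacks/xss/"],
    severity="High",
    classifications=["OWASP Top 10: Cross-Site Scripting"],
)
```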

One other point to make is that if you do a lot of tests, it's well worth having a database of previous findings, both to avoid looking up references repeatedly and to make sure that severities are consistent.
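A minimal sketch of what such a findings database could look like, using SQLite from the Python standard library; the schema and the example row are assumptions for illustration, not any standard tool:

```python
import sqlite3

conn = sqlite3.connect("findings.db")
conn.execute("""CREATE TABLE IF NOT EXISTS findings (
    title     TEXT PRIMARY KEY,  -- canonical finding name
    severity  TEXT,              -- stored once, so ratings stay consistent
    refs      TEXT               -- saved links, so you stop re-looking them up
)""")

# Record a finding the first time you write it up...
conn.execute("INSERT OR IGNORE INTO findings VALUES (?, ?, ?)",
             ("Web server version disclosed in headers", "Low",
              "https://owasp.org/www-project-secure-headers/"))
conn.commit()

# ...and pull the same severity and references into the next report.
row = conn.execute("SELECT severity, refs FROM findings WHERE title = ?",
                   ("Web server version disclosed in headers",)).fetchone()
print(row)  # -> ('Low', 'https://owasp.org/www-project-secure-headers/')
```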

Rory McCune
  • All of the answers complement each other, but I must say I like the way this was explained. When you have time, could you please give an example of what you think would be a good entry for a finding? Soon this will top Google search results on the matter (there's almost nothing about it anywhere else), so it would be nice to see that in the accepted answer. Thank you – Adi Jan 24 '13 at 18:08
  • The one other thing I would add here (and I may be biased due to where I have worked the last 10 years) is that including an interpretation of what the report means in **business language** is essential if you want the organisation to understand the real risk to them. This (hopefully) drives the correct response to the report. – Rory Alsop Jan 25 '13 at 11:37
  • @RoryAlsop, a _very_ important point, business language. That can play a great role in management's decision on handling these issues. – Adi Jan 25 '13 at 11:43
  • @Adnan Updated with some details on the format of findings, is that the kind of thing you were thinking of? – Rory McCune Jan 25 '13 at 12:09
  • @RoryAlsop good point, added. – Rory McCune Jan 25 '13 at 12:12
  • @RoryMcCune Spot on!! – Adi Jan 25 '13 at 12:12
  • You might also add steps to reproduce. It helps the reader to be able to walk through your report and reproduce the issues themselves, especially as the reader is not typically assumed to be a security expert or necessarily even security savvy – atk Oct 18 '14 at 13:00
  • And if you use your own risk classification instead of a standard like CVSS, include its definition in the document as well – atk Oct 18 '14 at 13:01
  • Oh, and I usually like to see/include an attack scenario for anything listed as critical or high, so that a layman can understand the risk. – atk Oct 18 '14 at 13:02
13

Exciting question! Too often I feel that our industry strives for the latest and greatest fad in security. We go after the latest exploits, spend serious cash on the latest tools, and blame layer 8 for the gaps. I know that is a gross generalization, but I wanted to underscore the importance of this topic -- reporting!

I have my opinions as to what should be included in a vulnerability report. From a structure perspective, a thorough report will have the following:

  • A title page: this will indicate the report name, the agency or department it is for, and the date the report was published.

  • A table of contents: seems obvious, but these documents can get lengthy, so include this as a courtesy.

  • An executive summary: this will be a high-level summary of the results, what was found and what the bottom line is.

  • An introduction: A simple statement of your qualifications, the purpose of the audit and what was in scope.

  • Findings: this section will contain your findings and will list the vulnerabilities or issues that should be remediated. The listing should be ordered by criticality level, which is hopefully defined by internal policies (i.e. if your vulnerability scanner finds a high-criticality vulnerability, it may not be truly high criticality given how that vulnerability manifests in your environment, so internal policies should assist in defining the criticality levels).

  • Methodologies: here you will discuss the tools used, how false positives were ruled out, and what processes were followed to complete this audit. This provides consistency and allows your audits to be repeatable in the event a finding is disputed or deemed not worth fixing by management.

  • Conclusion: a basic conclusion summarizing the information you have already put together.

  • Appendices: any extra attachments needed for reference.

Some of the folks working on the PTES are laying down some good foundations. While the focus there is penetration testing, I would think that many of the methodologies, especially reporting, could be transposed to an audit. You can check them out at http://www.pentest-standard.org/index.php/Reporting.

Awhitehatter
  • This answer, together with @RoryMcCune's, is the most complete, and it should really receive more upvotes than it currently does IMHO. That PTES link you're including was also the first thing I thought about when reading the question. I'm sure a lot of thought and experience went into preparing it; it's comprehensive and gives a good insight into the scope of pen-testing in general. It should be a good start for security audits as well, if possibly a bit overwhelming LOL. Also relevant: http://www.pentest-standard.org/index.php/Intelligence_Gathering (and most of that Wiki, honestly). – TildalWave Mar 31 '13 at 10:47
8

After a Penetration Test or Hybrid Application Analysis, the resulting report is centered around the findings. There should be a high-level overview that discusses the flaws and their collective impact on the system.

A finding is any security violation. This includes any CWE violation, but the most common web application findings fall under the OWASP Top 10. Each finding should have steps to reproduce the problem, a severity, the impact of the flaw, recommendations for fixing the issue, and links with more information. For example, if you find an XSS vulnerability, show a screen cap of an alert box plus the URL you used to trigger the issue, and link to the OWASP page on XSS. If you can access the cookie with the XSS, the issue can be used to hijack a session; otherwise it could be used to undermine CSRF protection.
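As an illustration of what "steps to reproduce" can look like for that XSS example, here is a minimal sketch; the target URL and parameter name are hypothetical placeholders, not a real endpoint:

```python
import requests

# Hypothetical vulnerable endpoint -- substitute the one from your own notes.
URL = "https://target.example/search"
PAYLOAD = "<script>alert(document.domain)</script>"

resp = requests.get(URL, params={"q": PAYLOAD})

# If the payload is reflected unescaped, the full URL and a response excerpt
# become the reproduction steps in the finding.
if PAYLOAD in resp.text:
    print("Reflected unescaped:", resp.url)

# Whether the session cookie is readable from script decides the impact:
# session hijacking if it is, undermining CSRF protection otherwise.
if "httponly" not in resp.headers.get("Set-Cookie", "").lower():
    print("Session cookie lacks HttpOnly -> token theft is plausible")
```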

What if you can't find anything? Keep looking! The CWE system is massive, and even the most seasoned developers make mistakes. There are vulnerabilities that affect literally every application I have touched: clickjacking, lack of brute-force protection, and information disclosure are probably the most common. For example: username/email address disclosure via the forgotten-password or signup feature; displayed version numbers (HTTP headers or anywhere else); verbose error messages, local paths, internal IP addresses... anything that might be useful to an attacker.

rook
  • First of all, +1 for the appreciated effort. But most of the answer is irrelevant here; you're suggesting things to look for. My problem isn't with _how_ to perform the audit, it's with outputting the results themselves in a professional manner. So I think the helpful part of this answer is the second paragraph; could you please expand on that? Thank you. – Adi Jan 24 '13 at 16:29
5

I think what Rook says is very true, though it is more about the core of the report than its structure, and is to be filled in after the report format has been designed.

Try the STAR model (Situation, Task, Action & Result): I have seen great reports written with this model under the hood. The great thing about it is that it can be used in almost all contexts; you would just need to adjust it to what is relevant to you. In this case, you could structure your report around this model and use what Rook described to fill in the structure.

Also, even if you have no actual findings, you could still write a full report based on the STAR model and still deliver something that is professional and coherent.

Lex
3

I've never written a security audit report, though in my role I tend to receive them. The best one we received looked over our whole product, focusing on specific areas of interest, and the report was broken down into those areas. Overall the format was:

  1. Title
  2. Executive summary - a brief overview of the purpose and scope of the audit, plus high-level comments on the main areas of concern and, just as importantly, on those areas that are done well
  3. Assessment methodology
  4. System description - covering the deployment being investigated
  5. Then sections on each area/target
  6. Summary and recommendations

Section 5 was broken down for each target as:

  1. Introduction
  2. Objective
  3. Significance
  4. Assessment (covering methodologies and actual vulnerabilities)
  5. Conclusion
Colin Cassidy
3

First: it's like writing a book; the first lines will keep the reader or not. (At the very least, write the intro.)

Something like: intro, beginning, content, end.

  • Introduction
  • Goals
    • Restate the terms of the contract.
    • Description of the targets
    • Description of the methods
  • Execution
    • Step by step: goal, action, reaction
    • Observations -> questions
    • Further operations, step by step
    • New observations...
    • Further again... and again, as the case requires
  • Conclusion
    • Success or not
    • What is clearly wrong and has to change
    • What failed and has to be corrected
    • Proposals

Some tricks to keep in mind:

  1. Your report will be read by non-technical people, so try to stay simple, but
  2. Your report also has to be read by technical people, for validation or application of your recommendations, so don't forget anything! Your work must be exact, comprehensive and incontestable.
  3. As your job concerns security failures, some obfuscation, like using John Doe:XXXXX for a username/password pair, may be good practice; this has to be mentioned in the intro, but it can be useful for further discussion.

So, to stay light for the administrative people, the introduction and conclusion have to be explained in a simple manner, maybe using pictorial language, and

to stay light for the technical people who have to work from your text, be exact and detailed. Maybe a simple console dump with some comments could be enough.

1

How do I reckon with this situation, and what exactly am I addressing?

I construct and architect security operations for a living. I have been addressing application security reports for a decade now and am quite sure about what should be included and where. Presentation matters.

I've given this a fair amount of thought. Looking at all the answers, I feel the focus should be on the parts they are missing, alongside the most definitive answer and the underlying knowledge.

To start off: please do not confuse an assessment with an audit. An audit has audit trails; an assessment has nitty-gritty technical details. The original post says an audit was done on the applications, which it couldn't have been; more technically, these were assessments.

I have picked up several methodologies, including the one followed at CERN (ref: http://pwntoken.github.io/enterprise-web-application-security-program/). To my astonishment, the technical details of a security assessment are most often more helpful to the developers and IT operations than to the business stakeholders. When you audit an application, or a set of applications on a public interface, you bring the results to the application stakeholders.

Coming to the points of my sample application report, here is how it looks (I apologize for the redactions; they were absolutely necessary per NDA norms):

[Three redacted screenshots of the sample application report]

Let me explain these components in key pointers (a rough rendering sketch follows the list):

  1. The first is the vulnerability classification: e.g. XSS could be written as code injection, and for shell injection, interpreter injection is the more accurate term. Similarly, for SQL injection, rather than MS-SQLi or MySQLi, the classification should be database injection.
  2. Next is the vulnerability title: for a database injection, it can always be made more precise as a one-liner, such as "UNION-based MySQL injection leads to command-level compromise".
  3. Next is the risk rating: you could go by WASC or anything else, but I preferred our own custom rating scheme. Look to OWASP, WASC or others if you have been told to stick to a particular methodology; NIST would be one if you're dealing mostly with network security.
  4. Description: this should be as detailed as possible. It sometimes happens that no classification fits because the threat is business-logic in nature; for those, it's necessary to have a strong understanding of the context and of why the attack scenario is framed as it is.
  5. Impact: again, I would state this as hard pointers in bullets. That is healthy and hygienic for business stakeholders as well.
  6. Proof of concept: pretty self-explanatory, but include details such as screenshots. Also include the parameters that are affected and, if it's an API that's affected, note down the endpoint along with its POST parameters, if any.
  7. Recommendation and remediation: largely self-explanatory as well. Keep a generic template for the OWASP Top 10, SANS 15 and WASC Top 26 entries; for the rest, use manually written, context-based recommendations, as that helps your IT operations.
  8. Fix responsibility: who is doing the fixing. In your case, it's you!
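As a rough illustration only (this is my sketch, not the author's redacted template, and every value is invented), the eight components might be rendered into a report section like this:

```python
# Hypothetical finding using the eight components listed above.
finding = {
    "classification": "Database Injection",
    "title": "UNION-based MySQL injection leads to command-level compromise",
    "risk_rating": "Critical",
    "description": "The 'id' parameter of /item is concatenated into a query.",
    "impact": ["Full read access to the customer database",
               "Potential command execution on the database host"],
    "proof_of_concept": "GET /item?id=1 UNION SELECT ... (see screenshot)",
    "recommendation": "Use parameterised queries throughout.",
    "fix_responsibility": "Application development team",
}

def render(f: dict) -> str:
    """Render one finding as a plain-text report section."""
    lines = [f"[{f['classification']}] {f['title']} ({f['risk_rating']})",
             f"Description: {f['description']}",
             "Impact:"]
    lines += [f"  - {item}" for item in f["impact"]]
    lines += [f"Proof of concept: {f['proof_of_concept']}",
              f"Recommendation: {f['recommendation']}",
              f"Fix responsibility: {f['fix_responsibility']}"]
    return "\n".join(lines)

print(render(finding))
```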

Hope this will help.

Shritam Bhowmick
1

This may sound like a cop-out, but it isn't: simply ask what format they like. I've performed all sorts of reviews, and most companies have a format for what information they like and how they like it. Ask to see previous reports to use as a template; it will save you loads of time.

GdD
  • I have to disagree with you. While the company _does_ have its own template for certain tasks, I believe a security audit report should have a more-or-less standard format. What if the company decided to hire a third party to fix the problem? – Adi Jan 24 '13 at 16:37
  • @Adnan, as a third party which often does security assessments I can tell you that I always attempt to use the customer's report format. Maybe your company doesn't have one, but it is worth asking as you could save lots of time. – GdD Jan 24 '13 at 16:41
  • I have to agree with this one. I understand @Adnan's objection, but the ***purpose*** for such a report is (or should be) to get the company to ***understand*** and ***fix*** problems - ASAP. Every company has a built-up corporate culture, and develops common ways of communicating. In good ones, the various business units all "speak the same language" which includes gradually adopting similar looking reports, similar looking memos, etc. If you want the company to quickly act, give it to them in the format they like, so they don't have to "translate" it into something they understand. – David Stratton Jan 24 '13 at 17:58
  • That said, I also agree that there are some things that a report should always include. The "adapt to the business" is in ***how*** the information is presented, not ***what*** information is presented. – David Stratton Jan 24 '13 at 18:01
  • Put that way, I'm now able to see both sides of this. Thanks David & GdD. – Adi Jan 24 '13 at 18:03
0

If the goal of a security audit report is to persuade management to remediate the security weaknesses found, then you want to describe the impact of not fixing the issues. As an IT auditor, I frequently meet resistance from non-technical management members about recommendations I make, such as:

  1. It's too costly to implement
  2. There is not enough return to the business to justify the effort
  3. No tangible economic benefit

You want to describe in your report what the impact of not remediating security vulnerabilities would be, in business terms such as the following (a rough costing sketch follows the list):

  1. The cost of lost business will be approximately $X if a security vulnerability is exploited by an adversary.
  2. X hours of downtime (RTO) will result, during which we will be unable to serve our customers, as a result of a threat exploiting a vulnerability affecting the availability of our systems.
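One common way to put numbers behind such statements (my addition, not the answer's, and every figure below is invented) is an annualized-loss-expectancy style calculation:

```python
# Annualized Loss Expectancy (ALE) sketch -- all figures are invented.
single_loss_expectancy = 250_000    # $ lost per successful exploitation
annual_rate_of_occurrence = 0.4     # expected incidents per year
downtime_hours = 12                 # RTO if availability is hit
revenue_per_hour = 8_000            # $ of business served per hour

ale = single_loss_expectancy * annual_rate_of_occurrence
downtime_cost = downtime_hours * revenue_per_hour

print(f"Expected annual loss if left unfixed: ${ale:,.0f}")
print(f"Cost per availability incident:       ${downtime_cost:,.0f}")
# Compare these against the remediation cost to answer objections 1-3 above.
```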

In summary, your goal is to obtain business buy-in, so that security is transformed from solely an IT function into a business concern with negative economic and non-economic (e.g. damaged reputation) ramifications if vulnerabilities are not heeded.

Anthony