Weighted Micro Function Points

Weighted Micro Function Points (WMFP) is a modern software sizing algorithm that succeeds established scientific methods such as COCOMO, COSYSMO, the maintainability index, cyclomatic complexity, function points, and Halstead complexity. It produces more accurate results than traditional software sizing methodologies[1], while requiring less configuration and knowledge from the end user, as most of the estimation is based on automatic measurements of existing source code.

Whereas many earlier measurement methods use source lines of code (SLOC) to measure software size, WMFP uses a parser to understand the source code, breaking it down into micro functions and deriving several code complexity and volume metrics, which are then dynamically interpolated into a final effort score. In addition to the waterfall software development life cycle, WMFP is also compatible with newer methodologies such as Six Sigma, the Boehm spiral, and Agile (AUP/Lean/XP/DSDM), due to its differential analysis capability made possible by its higher-precision measurement elements.[2]
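
The WMFP parser and its weighting model are proprietary, but the general idea of deriving complexity and volume metrics automatically from source code can be shown with a short sketch. The Python example below is not WMFP; it uses the standard ast module to compute two rough, illustrative proxies: a flow-complexity proxy (count of branching constructs) and an object-vocabulary proxy (count of unique identifiers). The function names and the choice of node types are assumptions made for illustration only.

```python
# Illustrative only: rough stand-ins for two WMFP-style measurements,
# derived automatically from source code with Python's ast module.
import ast

def measure(source: str) -> dict:
    tree = ast.parse(source)
    # Flow-complexity proxy: count branching constructs in the parse tree.
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    flow_complexity = sum(isinstance(node, branch_nodes) for node in ast.walk(tree))
    # Object-vocabulary proxy: count unique identifiers used in the code.
    vocabulary = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    return {"flow_complexity_proxy": flow_complexity,
            "object_vocabulary_proxy": len(vocabulary)}

if __name__ == "__main__":
    sample = (
        "def classify(x):\n"
        "    if x > 0:\n"
        "        return 'positive'\n"
        "    return 'non-positive'\n"
    )
    print(measure(sample))
```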

Measured elements

The WMFP measured elements are several different software metrics deduced from the source code by the WMFP algorithm analysis. Each is represented as a percentage of the whole unit (project or file) effort, and is translated into time (a small illustrative sketch follows the list below).

Flow complexity (FC) – Measures the complexity of a program's flow control path in a similar way to traditional cyclomatic complexity, but with higher accuracy, by using weights and relation calculations.
Object vocabulary (OV) – Measures the quantity of unique information contained in the program's source code, similar to the traditional Halstead vocabulary, but with dynamic language compensation.
Object conjuration (OC) – Measures how frequently the information contained in the program's source code is used.
Arithmetic intricacy (AI) – Measures the complexity of arithmetic calculations across the program.
Data transfer (DT) – Measures the manipulation of data structures inside the program.
Code structure (CS) – Measures the amount of effort spent on the program structure, such as separating code into classes and functions.
Inline data (ID) – Measures the amount of effort spent on embedding hard-coded data.
Comments (CM) – Measures the amount of effort spent on writing program comments.
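
The element shares themselves are produced by the WMFP analysis of real source code; the sketch below only illustrates how per-element percentages of a unit's effort can be translated into work hours. The percentages and the 120-hour total are invented for illustration.

```python
# Hypothetical element shares of one unit's effort (invented values; the real
# shares come from the WMFP analysis of the actual source code).
ELEMENT_SHARES = {
    "FC": 0.22, "OV": 0.18, "OC": 0.14, "AI": 0.08,
    "DT": 0.12, "CS": 0.12, "ID": 0.06, "CM": 0.08,
}
assert abs(sum(ELEMENT_SHARES.values()) - 1.0) < 1e-9  # shares cover the whole unit

def element_hours(total_effort_hours: float) -> dict:
    """Translate each element's share of the unit effort into work hours."""
    return {name: share * total_effort_hours for name, share in ELEMENT_SHARES.items()}

print(element_hours(120.0))  # e.g. a unit estimated at 120 programmer hours in total
```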

Calculation

The WMFP algorithm uses a three-stage process: function analysis, APPW transform, and result translation. A dynamic algorithm balances and sums the measured elements and produces a total effort score. The basic formula is:

\sum_{i=1}^{N} (W_i M_i) \prod_{q=1}^{K} D_q
M = the source metrics value measured by the WMFP analysis stage
W = the adjusted weight assigned to metric M by the APPW model
N = the count of metric types
i = the current metric type index (iteration)
D = the cost drivers factor supplied by the user input
q = the current cost driver index (iteration)
K = the count of cost drivers

This score is then transformed into time by applying a statistical model called average programmer profile weights (APPW), a proprietary successor to COCOMO II 2000 and COSYSMO. The resulting time, in programmer work hours, is then multiplied by a user-defined cost per hour of an average programmer to produce an average project cost, translated into the user's currency.
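
A minimal sketch of the calculation described above, assuming invented metric values, uniform placeholder weights, and a placeholder score-to-hours factor standing in for the proprietary APPW calibration; none of the numbers below come from the actual model.

```python
from math import prod

def wmfp_effort(metrics: dict, weights: dict, cost_drivers: list) -> float:
    """Basic formula: sum_i(W_i * M_i) * prod_q(D_q)."""
    weighted_sum = sum(weights[name] * value for name, value in metrics.items())
    return weighted_sum * prod(cost_drivers)

# Hypothetical inputs for illustration only.
metrics = {"FC": 40.0, "OV": 25.0, "OC": 30.0, "AI": 10.0,
           "DT": 15.0, "CS": 20.0, "ID": 5.0, "CM": 8.0}
weights = {name: 1.0 for name in metrics}   # placeholder weights; real APPW weights are proprietary
cost_drivers = [1.1, 0.9]                   # user-supplied cost driver factors D_q

effort_score = wmfp_effort(metrics, weights, cost_drivers)
hours = effort_score * 0.75                 # placeholder score-to-hours conversion, not APPW
cost = hours * 60.0                         # user-defined cost per programmer hour
print(f"score={effort_score:.1f}, hours={hours:.1f}, cost={cost:.2f}")
```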

Downsides

The basic elements of WMFP, compared with traditional sizing models such as COCOMO, are complex enough that they cannot realistically be evaluated by hand, even on smaller projects, and require software to analyze the source code. As a result, WMFP can only be used for analogy-based cost predictions, not theoretical educated guesses.


References

  1. Capers Jones (October 2009). "Software Engineering Best Practices", pp. 318–320.
  2. TickIT quarterly publication (2009). "Quarter 1, 2009", p. 13.