Campbell's law

Campbell's law is an adage developed by Donald T. Campbell, a psychologist and social scientist who often wrote about research methodology. It states:

"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."[1]

Applications

Campbell's law can be seen as an example of the cobra effect, in which public policy and other government interventions in economics, commerce, and healthcare produce unintended negative consequences.

Education

In 1976, Campbell wrote: "Achievement tests may well be valuable indicators of general school achievement under conditions of normal teaching aimed at general competence. But when test scores become the goal of the teaching process, they both lose their value as indicators of educational status and distort the educational process in undesirable ways. (Similar biases of course surround the use of objective tests in courses or as entrance examinations.)"[1]

The social science principle of Campbell's law is used to point out the negative consequences of high-stakes testing in U.S. classrooms, where pressure to raise scores may take the form of teaching to the test or outright cheating.[2] "The High-Stakes Education Rule" is identified and analyzed in the book "Measuring Up: What Educational Testing Really Tells Us".[3]

Campbell's law has been used to argue that the Obama administration's Race to the Top program and the Bush administration's No Child Left Behind Act can actually impair, rather than improve, educational outcomes.[4]

Similar rules

There are closely related ideas known by different names, such as Goodhart's law and the Lucas critique. Another concept related to Campbell's law emerged in 2006 when UK researchers Rebecca Boden and Debbie Epstein published an analysis of evidence-based policy, a practice espoused by Prime Minister Tony Blair. In the paper, Boden and Epstein described how a government that tries to base its policy on evidence can actually end up producing corrupted data because it "seeks to capture and control the knowledge producing processes to the point where this type of 'research' might best be described as 'policy-based evidence'."[5]

When people distort decisions in order to improve a performance measure, they often engage in surrogation, coming to believe that the measure is a better indicator of true performance than it really is.[6]

Campbell's law also carries a more constructive, if complicated, message. Measuring progress requires both quantitative and qualitative indicators.[7] However, relying on quantitative indicators for evaluation creates pressure to distort and manipulate them, so concrete measures must be adopted to limit such alteration and manipulation of information. In his article "Assessing the Impact of Planned Social Change",[8] Campbell emphasized that the more a quantitative social indicator is used for social decision-making, the more subject it becomes to corruption pressures and the more apt it is to distort the social processes it is intended to monitor.

Notes

  1. Campbell, Donald T. (1979). "Assessing the impact of planned social change". Evaluation and Program Planning. 2 (1): 67–90. doi:10.1016/0149-7189(79)90048-X.
  2. Aviv, Rachel (21 July 2014). "Wrong Answer". The New Yorker.
  3. Koretz, Daniel M. (2009). Measuring Up: What Educational Testing Really Tells Us. Harvard University Press. ISBN 978-0-674-03972-8.
  4. "Trust but verify: The real lessons of Campbell's Law". The Thomas B. Fordham Institute. edexcellence.net. Retrieved 2018-06-30.
  5. Boden, Rebecca; Epstein, Debbie (2006). "Managing the research imagination? Globalisation and research in higher education". Globalisation, Societies and Education. 4 (2): 223–236. doi:10.1080/14767720600752619.
  6. Bentley, Jeremiah W. (2017). "Decreasing Operational Distortion and Surrogation through Narrative Reporting". Rochester, NY. SSRN 2924726.
  7. "Quantitative & Qualitative Indicators". Monitoring & Evaluation. Retrieved 2018-06-30.
  8. Campbell, Donald T. (1979). "Assessing the impact of planned social change". Evaluation and Program Planning. 2 (1): 67–90. doi:10.1016/0149-7189(79)90048-X. ISSN 0149-7189.
