Hierarchy of evidence

A hierarchy of evidence is a ranking of the quality of evidence used in evidence-based medicine. More than 80 hierarchies have been documented,[1] and while they vary, they follow a basic pattern of attempting to rank evidence based on:

  • Relevance to the target population (usually humans, but potentially also animals in veterinary medicine), e.g. studies on humans are more relevant to humans than studies on rodents
  • Exclusion of bad studies. Quality reviews and meta-analyses exclude studies for many reasons; studies should usually be ignored when they have a poor design, have not been peer-reviewed, have not controlled sufficiently for biases, or were funded by an industry with an interest in the outcome.[2][3]
It is important to recognize and remove bad studies from reviews and meta-analyses (keeping in mind that bad studies can appear even in peer-reviewed journals), because bad studies can taint the outcomes of the reviews/meta-analyses.[2]
  • Statistical power, e.g. all other things being equal, studies on more individuals have more statistical power (and can therefore potentially give results with greater statistical significance) than studies on fewer individuals
  • Predictive power: whether the evidence demonstrates causation rather than mere correlation; for example, some epidemiological studies can provide evidence of causation[4]
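The statistical power point above can be illustrated with a quick sketch. The specific numbers here (an effect size of 0.3 standard deviations, a two-sided test at the conventional 5% significance level, and a two-sample z-test approximation) are illustrative assumptions, not values from the article:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(n_per_group, effect_size=0.3, z_crit=1.96):
    """Approximate power of a two-sided, two-sample z-test with equal
    group sizes; effect_size is in standard-deviation units."""
    # The non-centrality parameter grows with the square root of n,
    # so larger studies can detect the same effect more reliably.
    ncp = effect_size * math.sqrt(n_per_group / 2.0)
    return normal_cdf(ncp - z_crit)

# All else being equal, the larger study has more power
# to detect the same underlying effect.
print(round(power_two_sample(50), 2))
print(round(power_two_sample(200), 2))
```

Running this shows power rising sharply with sample size, which is why, other things being equal, larger studies rank higher: a small study can easily miss a real effect.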

An explicit hierarchy of evidence was first described in 1979 by the Canadian Task Force on the Periodic Health Examination.[1][5][6] However, implicit hierarchies of evidence can be inferred from earlier literature reviews, such as the International Agency for Research on Cancer's monograph series, which began in 1972.[7]

A generally accepted hierarchy of evidence is:[2][8]

  1. Systematic reviews and meta-analyses
  2. Randomized controlled trials with definitive results
  3. Randomized controlled trials with non-definitive results
  4. Cohort studies
  5. Case-control studies
  6. Cross-sectional surveys
  7. Case reports

IARC Monographs, particularly the more recent ones, use a hierarchy along the following lines, in which evidence lower in the list can be used to support evidence higher in the list for overall evaluations:[7]

  1. Systematic reviews and meta-analyses
  2. Randomized controlled trials in humans
  3. Cohort studies in humans
  4. Case-control studies in humans
  5. Chronic studies in animals
  6. Acute studies in animals, in vitro studies, and mechanistic studies

Criticisms

Hierarchies are a poor basis for the application of evidence in clinical practice. The Evidence-Based Medicine movement should move beyond them and explore alternative tools for appraising the overall evidence for therapeutic claims.
—Christopher J. Blunt[9]

Hierarchies of evidence are not without criticism, but the criticism has come from within the philosophy of science and has generally been ignored within evidence-based medicine.[10]

Problems with evidence hierarchies include:[9][11]

  • There is no method for evaluating which of the many different hierarchies is better for a given problem.
  • Hierarchies of evidence are at best a heuristic and lack both empirical and theoretical justification.
  • Studies with high internal validity (well-controlled and with a homogeneous study group) might have low external validity for a heterogeneous population.

Although there are serious criticisms of the hierarchy of evidence in evidence-based medicine, no serious alternative has been proposed for prioritizing the sometimes massive amounts of evidence relevant to a given problem.

References

  1. Hierarchies of Evidence by Chris J. Blunt
  2. How to read a paper: Getting your bearings (deciding what the paper is about) by Trisha Greenhalgh (1997) BMJ 315:243-315.
  3. The Oprah effect and why not all scientific evidence is valuable: Some studies are more equal than others by Julia Belluz (Nov 9, 2011) Maclean's.
  4. Causation in epidemiology by M. Parascandola & D. Weed (2001) J. Epidemiol. Community Health 55(12): 905–912. doi: 10.1136/jech.55.12.905.
  5. The Levels of Evidence and their role in Evidence-Based Medicine by Patricia B. Burns et al. (2011) Plast. Reconstr. Surg. 128(1): 305–310. doi:10.1097/PRS.0b013e318219c171.
  6. The periodic health examination by the Canadian Task Force on the Periodic Health Examination (1979) CMA Journal 121:1193-1254.
  7. IARC Monographs on the Evaluation of Carcinogenic Risk to Humans, International Agency for Research on Cancer.
  8. Users' guides to the medical literature. IX. A method for grading health care recommendations by G. H. Guyatt et al. (1995) JAMA 274:1800-4.
  9. Hierarchies of evidence in evidence-based medicine by Christopher J Blunt (2015) PhD thesis, The London School of Economics and Political Science.
  10. Looking for Rules in a World of Exceptions: reflections on evidence-based practice by Ross E. G. Upshur (2005) Perspect. Biol. Med. 48(4): 477-89.
  11. Philosophical critique exposes flaws in medical evidence hierarchies: Rankings of research reliability are logically untenable, an in-depth analysis concludes by Tom Siegfried (2:30pm, November 13, 2017) Science News.
This article is issued from RationalWiki. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.