United States National Research Council rankings

The United States National Research Council conducts a survey and compiles a report on United States Research-Doctorate Programs nominally every 10 years, although the interval between successive rankings has exceeded 10 years.

Methodology

Data collection for the most recent report began in June 2006;[1] it was released on September 28, 2010. These rankings did not provide exact ranks for any university or doctoral program; rather, a statistical range was given. This was because "the committee felt strongly that assigning to each program a single number and ranking them accordingly would be misleading, since there are significant uncertainties and variability in any ranking process."[2]
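
The NRC's exact procedure for producing these ranges is not reproduced here, but the general idea of reporting a range of ranks rather than a single rank can be illustrated with a small resampling sketch: repeatedly perturb the weights applied to program characteristics, recompute the ranking each time, and report a percentile interval of each program's simulated ranks. All program names, characteristic values, and weights below are invented for illustration and are not NRC data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative data: 5 hypothetical programs scored on 3 standardized characteristics.
    programs = ["A", "B", "C", "D", "E"]
    X = rng.normal(size=(5, 3))               # made-up characteristic values
    base_weights = np.array([0.5, 0.3, 0.2])  # assumed baseline weights

    n_sim = 1000
    ranks = np.empty((n_sim, len(programs)), dtype=int)
    for i in range(n_sim):
        # Perturb the weights to mimic uncertainty in how raters value each characteristic.
        w = base_weights + rng.normal(scale=0.1, size=3)
        scores = X @ w
        # Rank 1 = highest score.
        ranks[i] = scores.argsort()[::-1].argsort() + 1

    # Report a 5th-95th percentile range of ranks for each program,
    # analogous in spirit to the ranges the NRC published.
    lo, hi = np.percentile(ranks, [5, 95], axis=0)
    for name, low, high in zip(programs, lo, hi):
        print(f"Program {name}: rank range {int(low)}-{int(high)}")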

Two series of rankings were offered:

  • The R-rankings were based on regression analysis. According to the NRC, this analysis was "based on an indirect approach to determining what faculty value in a program": a sample faculty group was first asked to rate a number of programs in their area, and a statistical analysis was then used "to calculate how the 20 program characteristics would need to be weighted in order to reproduce most closely the sample ratings." In doing so, the rankings "attempted to understand how much importance faculty implicitly attached to various program characteristics when they rated the sample of programs." The weights assigned to each characteristic varied by field (an illustrative sketch of this regression step follows this list).[2]
  • The S-rankings were survey-based: faculty were "asked about the importance of 20 characteristics ... in determining the quality" of a type of program. Weights were assigned to each characteristic according to the survey results, varying by discipline.[3]
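
As a rough illustration of the R-ranking idea only (not the NRC's actual model), one can regress the ratings a faculty sample assigned to a set of programs on those programs' measured characteristics; the fitted coefficients then play the role of the implicit weights faculty attached to each characteristic. The data and weight values below are synthetic assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic example: 40 rated programs, 20 measured characteristics
    # (mirroring the 20 program characteristics the NRC describes).
    n_programs, n_chars = 40, 20
    X = rng.normal(size=(n_programs, n_chars))        # standardized characteristics (made up)
    hidden_weights = rng.dirichlet(np.ones(n_chars))  # "true" implicit preferences (made up)
    ratings = X @ hidden_weights + rng.normal(scale=0.1, size=n_programs)

    # Least-squares fit recovers the weights that most closely reproduce the
    # sample ratings, which is the "indirect" spirit of the R-rankings.
    implied_weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)

    # Apply the implied weights to the programs and rank them by score.
    scores = X @ implied_weights
    order = scores.argsort()[::-1]
    print("Top 3 programs by implied-weight score:", (order[:3] + 1).tolist())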

The factors included in these computations included[4] the number of publications per faculty member, citations per publication (except in computer science and the humanities), fraction of the faculty supported by grants and number of grants per faculty member, diversity of the faculty and students, student GRE scores, graduate student funding, number of Ph.D.s and completion percentage, time to degree, academic plans of graduating students, student work space, student health insurance, and student activities.
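
For the S-rankings, the survey responses translate more directly into weights applied to each program's standardized characteristic values. A minimal sketch of that weighted-sum step, using invented characteristic names, weights, and values (the NRC used 20 field-specific characteristics), is shown below.

    import numpy as np

    # Invented survey-derived weights for four characteristics, in this order:
    # publications per faculty, citations per publication, grant support fraction, completion rate.
    survey_weights = np.array([0.35, 0.25, 0.25, 0.15])  # assumed to sum to 1

    # Hypothetical standardized (z-scored) values for three programs.
    program_values = {
        "Program X": np.array([1.2, 0.4, 0.8, -0.1]),
        "Program Y": np.array([0.3, 1.1, -0.2, 0.9]),
        "Program Z": np.array([-0.5, 0.2, 1.4, 0.6]),
    }

    # S-style score: weighted sum of standardized characteristic values.
    scores = {name: float(vals @ survey_weights) for name, vals in program_values.items()}
    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: {score:.3f}")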

Reception

The rankings have been both praised and criticized by academics.

Physicist Peter Woit stated that historically the NRC rankings have been the "gold standard" for academic department ratings.[5] The rankings were also called "the gold standard" by biomedical engineer John M. Tarbell[6] and in news releases by Cornell University[7] and the University of California.[8] The Center for a Public Anthropology praised the National Research Council's 2010 rankings as "an impressive achievement" for its move away from reputational rankings and toward data-based rankings, but also noted that the lack of specific rankings reduced clarity even as it improved accuracy.[9] William Colglazier and Jeremiah P. Ostriker defended the rankings in the Chronicle of Higher Education,[10] responding to a critique by Stephen M. Stigler.[11]

Sociologist Jonathan R. Cole, one of the members of the NRC committee that produced the ranking, critiqued the final result. Cole objected to the committee's choice not to include any "measures of reputational standing or perceived quality" in the survey, which he called "the most significant misguided decision" in the recent study. Cole also critiqued the various statistical inputs and the weight assigned to each.[12] The Computing Research Association and various computer science departments also expressed "serious concerns" about vaguely defined reporting terms leading to inconsistent data, inaccuracies in the data, and the use of bibliometrics from the ISI Web of Knowledge despite its poor coverage of many computer science conferences.[13][14][15][16][17] Geographers A. Shortridge, K. Goldsberry, and K. Weessies found significant undercounts in the data and poor sensitivity to "noise" in the rankings, concluding that "We caution against using the 2010 NRC data or metrics for any assessment-oriented study of research productivity."[18] The rankings were also critiqued by sociologist Fabio Rojas.[19]


References

  1. Morse, Robert (July 9, 2009), "The Wait for the National Research Council Rankings Continues", U.S. News & World Report
  2. "A Data-Based Assessment of Research-Doctorate Programs in the United States: Frequently Asked Questions" (2010). United States National Research Council.
  3. Assessment of Research Doctorate Programs, U.S. National Academies
  4. "A Data-Based Assessment of Research-Doctorate Programs in the United States" (2010).
  5. Peter Woit, "NRC Rankings" (September 27, 2010). Not Even Wrong.
  6. "NRC Ranks CCNY PhD Program Among Best in US" (September 19, 2011). City College of New York.
  7. Susan Kelley, "CU awaits release of gold standard of grad school rankings" (September 16, 2010). Cornell University.
  8. Andy Evangelista, "Ph.D. programs rank high in National Research Council report" (September 28, 2010). University of California. Archived 2012-03-03 at the Wayback Machine.
  9. "Overview." Center for a Public Anthropology.
  10. E. William Colglazier and Jeremiah P. Ostriker, "Counterpoint: Doctoral-Program Rankings—the NRC Responds." (October 17, 2010). Chronicle of Higher Education.
  11. David Glenn, "A Critic Sees Deep Problems in the Doctoral Rankings" (September 30, 2010). Chronicle of Higher Education.
  12. Cole, Jonathan R. (April 24, 2011), "Too Big to Fail: How 'better than nothing' defined the National Research Council's graduate rankings", Chronicle of Higher Education
  13. Erroneous NRC Ranking Data for UW CSE, University of Washington Department of Computer Science and Engineering, retrieved 2010-09-29. Archived 2010-10-01 at the Wayback Machine.
  14. NRC Doctoral Rankings and Computer Science, Peter Harsha, Computing Research Association, September 28, 2010. Retrieved 2010-09-29.
  15. Glenn, David (October 6, 2010), "Computer Scientists Cry Foul Over Data Problems in NRC Rankings", Chronicle of Higher Education
  16. Grimson, Eric (May 2010), "Dangers of Rankings with Inaccurate Data", Computing Research News
  17. Bernat, Andrew; Grimson, Eric (December 2011), "Doctoral program rankings for U.S. computing programs: the national research council strikes out", Communications of the ACM, 54 (12): 41–43, doi:10.1145/2043174.2043203
  18. Shortridge, Ashton; Goldsberry, Kirk; Weessies, Kathleen (2011), "Measuring Research Data Uncertainty in the 2010 NRC Assessment of Geography Graduate Education", Journal of Geography, 110 (6): 219–226, doi:10.1080/00221341.2011.607510
  19. "NRC Rankings: Was It a Big Fail?? (September 30, 2010). Orgtheory.net.