Assessment in higher education

Assessment is a systematic process in higher education that uses empirical data on student learning to refine programs and improve student learning.[1] As a continuous process, assessment involves establishing clear, measurable student learning outcomes; providing sufficient learning opportunities to achieve those outcomes; systematically gathering, analyzing, and interpreting evidence to determine how well student learning matches expectations; and using the collected information to improve student learning.[2] Assessment functions as part of a continuous cycle whose parts are revised and monitored. The term “assessment” is defined broadly in that any outcome or goal in any activity or discipline can be part of this process.

Types

Assessment in higher education can focus on the individual learner, a course, an academic program, or the institution.

Course-level assessment

Assessment embedded at the course level (sometimes referred to as embedded assessment or authentic assessment) typically involves the use of assignments.[3] Students receive feedback on their performance on assignments, and faculty gain knowledge of student learning to use for grading.[4] Ideally, the work assessed within courses relates to specific program-level student learning outcomes. Angelo and Cross[5] believe assessment in the classroom is an important part of the faculty feedback loop: it can provide meaningful information about their effectiveness as teachers while also giving students a measure of their progress as learners.

Student perception of feedback

Studies show that students value feedback.[6] Feedback that is timely, specific, and delivered individually helps to reinforce this perception.[7] This type of feedback, usually referred to as just-in-time feedback, helps to create a feedback loop between student and teacher. Students generally find more utility in formative feedback when they are also presented with strategies for how to use it.[8] These strategies help with perception because they address the lack of understanding of academic discourse that hinders students' ability to use feedback effectively.[6]

Quality of feedback

Timing is crucial in the delivery of feedback to students.[9] Kift and Moody claim that the complexity of the assignment should dictate how soon feedback should be provided: for simpler tasks, feedback should be provided within 24 hours, whereas for more complicated tasks it is more beneficial to give students time for reflection before providing feedback. "Effective feedback should be task related and focus on student performance rather than personal attributes of the student."[10] Studies have shown that the way feedback is delivered can have either positive or negative effects on the student.[11] Corrective feedback helps to move student learning forward and improves future assessments.

Principles of good feedback

  1. Facilitates the development of self-assessment (reflection) in learning.
    1. Students might request the kinds of feedback they want.
    2. Students can identify the strengths and weaknesses in their own work, based on a rubric, before giving it to the teacher for feedback.
    3. Students reflect on their achievements.
    4. Teacher and student set milestones so they can reflect back on progress and forward on what to do next.
    5. Students give peer feedback to one another.
  2. Encourages teacher and peer dialogue around learning.
    1. The use of one-minute papers.[5]
    2. Read feedback given by teacher, and discuss with other students.
    3. Discussing feedback that students found useful and why.
    4. Group projects.
  3. Helps clarify what good performance is (goals, criteria, standards expected).
    1. Provide students with good examples along with feedback.
    2. Discussion about criteria in the classroom.
    3. Include student participation during feedback process.
    4. Collaborate with students on creating grading/feedback rubric.
  4. Provides opportunities to close the gap between current and desired performance.
    1. Increase the number of opportunities for resubmitting assignments.
    2. Teacher models how to close the learning gap.
    3. Include “Action Points” for students along with feedback.
      1. Alternatively, have the students figure out their own action points.
  5. Delivers high quality information to students about their learning.
    1. Limiting the amount of feedback, and the number of criteria used, so that it remains effective.
    2. Providing feedback soon after the activity.[12]
    3. Provide corrective advice.
    4. Prioritizing areas of improvement.
  6. Encourages positive motivational beliefs and advocates for self-efficacy.
    1. Giving students a grade only after they've responded to feedback.
    2. Allowing students time to rewrite certain parts of their work based on feedback.
    3. Automated testing.
  7. Provides information to teachers that can be used to help shape the teaching.
    1. Exit-ticket/One-minute papers.
    2. Students request feedback they want.
    3. Having students identify where they are having trouble.
    4. Students work in groups to choose one idea they are unclear about and share that idea.[13]

Program-level assessment

Program assessment is a best practice in higher education.[1] The process involves a framework for placing priority and attention on the process of student learning and, most specifically, the program objectives, organization of curriculum, pedagogy, and student development.[1] Like course assessment, program assessment requires defining a statement of mission/goals, establishing program-specific student learning outcomes, and identifying where learning takes place, or “learning opportunities”. The next step in program assessment involves the development of a research question or intended goal for assessment: What questions does the program seek to answer, and what direct or indirect evidence needs to be collected to answer them? The collected data are evaluated, analyzed, and interpreted, and an action plan is implemented to improve the program and student learning.[14]

Mission alignment

Each course a student takes occurs within the context of a program, which occurs within the context of overarching university outcomes. With the assumption that coursework should support the program and programs should support the overall mission of the university, alignment of mission (and learning outcomes) should occur. Assessment at the course level typically takes the form of tests, quizzes, and assignments. When courses are mapped to program outcomes, data from several courses covering the same outcome can be aggregated and used for program assessment. Additional program assessment can take the form of embedded assignments, field experiences, capstone experiences, portfolios, or tests of majors.
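The aggregation step described above can be sketched in a few lines of Python. This is a minimal illustration, not an established tool: the course codes, outcome names, and scores are all invented for the example.

```python
# Hypothetical map from courses to the program outcome each one covers.
course_to_outcome = {
    "ENG101": "written_communication",
    "ENG250": "written_communication",
    "PHI110": "critical_thinking",
}

# Average rubric score reported by each course (0-4 scale, invented data).
course_scores = {"ENG101": 2.8, "ENG250": 3.1, "PHI110": 3.4}

# Group course-level scores under the program outcome they map to.
by_outcome = {}
for course, outcome in course_to_outcome.items():
    by_outcome.setdefault(outcome, []).append(course_scores[course])

# Aggregate each outcome's scores into one program-level figure.
program_report = {o: round(sum(v) / len(v), 2) for o, v in by_outcome.items()}
print(program_report)
# {'written_communication': 2.95, 'critical_thinking': 3.4}
```

A real mapping would come from a curriculum map maintained by the program, but the aggregation logic is the same: many course-level data points roll up into one figure per outcome.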

Scoring guides

Rubrics are often used to assess student work. Essentially, a rubric is a scoring guide grid consisting of a scale of some sort (i.e., levels of performance), the dimensions or important components of an assignment, and descriptions of what constitutes each level of performance for each assignment dimension. Rubrics can be particularly effective for assessment due to how closely they are tied with the teaching and learning process - they can be used for grading, as well as giving students feedback on their performance.[15]
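The grid structure of a rubric lends itself to a simple sketch in code. The following is an illustrative Python example, assuming a two-dimension, three-level rubric; the dimension names, level descriptions, and averaging scheme are invented for the example, not a standard.

```python
# A rubric as a grid: each dimension maps performance levels to descriptions.
RUBRIC = {
    "thesis": {
        1: "No clear thesis",
        2: "Thesis present but vague",
        3: "Clear, arguable thesis",
    },
    "evidence": {
        1: "Little or no supporting evidence",
        2: "Some evidence, loosely connected",
        3: "Well-chosen evidence tied to claims",
    },
}

def score(ratings: dict) -> float:
    """Combine per-dimension levels into one overall score (simple average)."""
    return sum(ratings.values()) / len(ratings)

# Grading one piece of student work: a level per dimension, plus feedback text.
ratings = {"thesis": 3, "evidence": 2}
feedback = {dim: RUBRIC[dim][level] for dim, level in ratings.items()}
print(score(ratings))  # 2.5
```

The same structure serves both purposes the paragraph mentions: the numeric levels support grading, while the level descriptions double as ready-made feedback for the student.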

Assessment points

Assessment is most effective when it occurs at multiple points in time along the student's path. Multiple measures over time provide a way to triangulate data and increase confidence in the results.

Effective assignments

In order to assess student learning, students must be given assignments where they can demonstrate what they know and can do.[16]

Indirect and direct measures

A distinction is made between direct and indirect measures of learning. Direct measures, as their name implies, involve directly examining student work products to assess the achievement of learning outcomes. These work products occur in a variety of formats, including objective tests and rubric-scored projects, performances, and written work. A recent survey of provosts indicates that classroom-based assessment and rubrics are most frequently used; according to the same survey, fewer than 50% of institutions use large-scale standardized tests such as the Collegiate Learning Assessment (CLA). Indirect measures focus on data from which one can make inferences about learning. Indirect measures can include surveys on student and faculty perceptions about learning, focus groups, and exit interviews. National surveys such as the National Survey of Student Engagement (NSSE) have become increasingly popular indirect measures, with roughly 85% of institutions using these measures according to a recent survey.

Sampling

In a classroom setting or in a program level assessment, it is often possible to assess the entire population of interest, referred to as a census. However, it is sometimes impractical or ineffective to assess an entire population, due to the time and effort involved as well as survey fatigue if the same group of students are being asked to take multiple surveys. Therefore, sampling strategies can be used to pick a subset of the population of interest. The goal of sampling is to select a smaller group that represents the population on key characteristics. Multiple sampling approaches are commonly used in higher education assessment, including random and stratified sampling. In a random sample, each individual is equally likely to be selected. In a stratified sample, individuals are grouped based on specific characteristics of interest and then randomly selected from each group to ensure adequate numbers of each group.
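The two sampling approaches above can be sketched with Python's standard library. The student records and the "year" stratum are invented for illustration; a real assessment would stratify on whatever characteristic matters (major, cohort, demographics).

```python
import random

# Invented population: 300 students, unevenly split across two class years.
students = [{"id": i, "year": ("first" if i % 3 else "senior")}
            for i in range(300)]

# Simple random sample: every student is equally likely to be selected.
simple = random.sample(students, k=30)

# Stratified sample: group by the characteristic of interest, then draw
# randomly within each group so every group is adequately represented.
strata = {}
for s in students:
    strata.setdefault(s["year"], []).append(s)

stratified = [s for group in strata.values()
              for s in random.sample(group, k=15)]
```

Note the trade-off the paragraph implies: the simple random sample could, by chance, under-represent seniors (only a third of this population), whereas the stratified sample guarantees 15 from each group.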

Use of findings

Assessment data are only effective in "closing the loop" and improving programs if they are shared and communicated widely.

Benchmarking

Benchmarking is a way for an institution or program to determine how it measures up to peers, or to its own performance at an earlier time.

Professional organizations

There are numerous regional organizations dedicated to discussing issues and policies related to assessment in higher education. The Association for the Assessment of Learning in Higher Education (AALHE) is one international organization. Seven other regional assessment organizations exist in the United States.[17][18][19]

In an interview with the Chronicle of Higher Education, Marsha Watson, former director of the AALHE, stated that the “rising demands for accountability mean that assessment must evolve into its own discipline."[20]

The National Institute for Learning Outcomes Assessment is another organization dedicated to helping institutions use assessment data to improve academic quality. They have delivered a number of research papers on assessment practices.[21]

Benefits

There is heightened political and public pressure on higher education institutions to explain what they are trying to do and provide evidence they are actually doing it.[5] Faculty want students to learn. In addition, faculty love their disciplines and want to share their knowledge and enthusiasm with students. Placing emphasis on what students learn and what students do helps to effectively drive improvement in the learning process, program planning and overall institutional improvement.[5] Assessment adds transparency to the teaching and learning process, helps to provide some evidence to the effectiveness of student learning and promotes an environment where continuous improvement is well understood and ingrained in the institutional culture.

Linda Suskie, a higher education consultant, says that "Good assessments are not once-and-done affairs. They are part of an ongoing, organized, and systematized effort to understand and improve teaching and learning."[22]

Criticism

Some university faculty and researchers have criticized student learning outcomes assessment in higher education. Robert Shireman, a senior fellow at the Century Foundation, argued that accrediting agencies often require institutions to reduce learning to meaningless blurbs, or student learning outcomes, which “prevents rather than leads to the type of quality assurance that has student work at the center.”[23] Erik Gilbert, a professor of history, wrote another notable essay criticizing assessment in higher education, arguing that it has little effect on educational quality and that accrediting agencies require institutions to invest time and resources in collecting evidence on student learning even though, he believes, it does not improve academic quality.[24] Molly Worthen[25] also criticized assessment for its seeming lack of empirical evidence indicating it improves student learning. However, Matthew Fuller[26] and others have developed the Surveys of Assessment Culture, aimed at examining the foundations of institutional cultures of assessment through empirical studies.


References

  1. Allen, M.J. (2004). Assessing Academic Programs in Higher Education. San Francisco: Jossey-Bass. ISBN 978-1882982677.
  2. Suskie, Linda (2004). Assessing Student Learning. Bolton, MA: Anker.
  3. NC State University. "Course-based Assessment Overview" (PDF). NC State Division of Academic and Student Affairs. Retrieved 2 March 2017.
  4. Whelburg, C.M. (2008). Promoting Integrative and Transformative Assessment. San Francisco: Jossey-Bass.
  5. Angelo, Thomas; Cross, K. Patricia (1993). Classroom Assessment Techniques: A Handbook for College Teachers. San Francisco, CA: Jossey-Bass. ISBN 1555425003.
  6. Weaver, Melanie R. (June 2006). "Do students value feedback? Student perceptions of tutors' written responses" (PDF). Assessment & Evaluation in Higher Education. 31 (3): 379–394. doi:10.1080/02602930500353061. ISSN 0260-2938.
  7. Murphy, Carole; Cornell, Jo (2010). "Student Perceptions of Feedback: Seeking a Coherent Flow". Practitioner Research in Higher Education. 4 (1): 41–51. ISSN 1755-1382.
  8. Jonsson, Anders (March 2013). "Facilitating productive use of feedback in higher education". Active Learning in Higher Education. 14 (1): 63–76. doi:10.1177/1469787412467125. ISSN 1469-7874.
  9. Kift, Sally M.; Moody, Kim E. (2009). "Harnessing assessment and feedback in the first year to support learning success, engagement and retention". ATN Assessment Conference 2009 Proceedings. RMIT University, Melbourne. Archived from the original on 2012-03-29. Retrieved 2019-03-03.
  10. Weston-Green, Katrina; Wallace, Margaret (September 2016). "A method of providing engaging formative feedback to large cohort first-year physiology and anatomy students". Advances in Physiology Education. 40 (3): 393–397. doi:10.1152/advan.00174.2015. ISSN 1043-4046. PMID 27503899.
  11. Hattie, John; Timperley, Helen (March 2007). "The Power of Feedback". Review of Educational Research. 77 (1): 81–112. doi:10.3102/003465430298487. ISSN 0034-6543. S2CID 82532100.
  12. "Just-in-time teaching". Wikipedia.
  13. Nicol, David J.; Macfarlane‐Dick, Debra (April 2006). "Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice". Studies in Higher Education. 31 (2): 199–218. doi:10.1080/03075070600572090. ISSN 0307-5079.
  14. University of Hawaii at Manoa Assessment Office. "Basic Steps of Program Assessment". Retrieved 2 March 2017.
  15. Stevens, D.D.; A.J. Levi (2013). Introduction to Rubrics. Sterling: Stylus.
  16. Hargreaves, D. J. (December 1997). "Student Learning and Assessment Are Inextricably Linked". European Journal of Engineering Education. 22 (4): 401–409. doi:10.1080/03043799708923471. ISSN 0304-3797.
  17. "Virginia Assessment Group". Virginia Assessment Group Website. Retrieved 10 March 2017.
  18. "Chicago Area Assessment Group". CAAG: Chicago Area Assessment Group. Retrieved 10 March 2017.
  19. "Washington Area Student Learning Assessment Network". The George Washington University Office of Academic Planning & Assessment. Retrieved 10 March 2017.
  20. Glenn, David (2011-06-02). "Learning-Assessment Specialists to Gather at Group's First Conference". The Chronicle of Higher Education. Retrieved 28 March 2017.
  21. Suskie, Linda. "Why Are We Assessing". Inside Higher Ed. Retrieved 28 March 2017.
  22. Suskie, Linda. Assessing Student Learning: A Common Sense Guide. John Wiley & Sons, Inc. p. 50.
  23. Shireman, Robert. "SLO Madness". Inside Higher Ed. Retrieved 10 April 2017.
  24. Gilbert, Erik (2015-08-14). "Does Assessment Make Colleges Better? Who Knows?". The Chronicle of Higher Education. Retrieved 10 April 2017.
  25. Worthen, Molly (2018-02-23). "The Misguided Drive to Measure 'Learning Outcomes'". The New York Times. Retrieved 16 April 2018.
  26. Fuller, Matthew. "Assessment Culture Research". Survey of Assessment Culture. Retrieved 16 April 2018.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.