Strategies for Peer Review in Higher Education Assessment

January 21, 2015 Kristen Hicks

Peer review is treated as a sacred concept in academia. Having data and conclusions vetted by trustworthy peers is a cornerstone of what academics consider quality work, and it's what separates academic research from the less reliable sources common in the media and on the Web. Because of the trust and confidence faculty members place in the peer-review process, St. John Fisher College decided to experiment with applying it to something new: assessment.

When a school commits to developing a culture of assessment, exams and the data they produce become a powerful component of decision-making. If you base important decisions about your curriculum and courses on exam analytics, you need confidence that the exams themselves are well constructed. Bringing the familiar and revered peer-review process into the equation builds that confidence.

You can get all the details regarding St. John Fisher’s experiences with peer-reviewed exam assessment from our webinar with Dr. Jennifer Mathews, an associate professor there. Here’s the basic rundown.

1) They made it a voluntary program for faculty. You don’t want faculty members to feel as if they’re being forced to do more work, but if you can make a convincing case for the value of successful exams and reliable learning assessment, you should be able to get a significant number of them to support a new approach. Since the participants will be invested in the process, they will be more committed to improving exam questions.

2) Two or three peers review each exam created by a participating faculty member. Having more than one peer go over each exam means more opportunities to find errors or flaws in the exam questions. When reviewing each exam question, reviewers consider the following:

• Is the question clear?
• Are there any typos that might cause confusion?
• Is the question coded to the appropriate category of Bloom’s Taxonomy?
• Has the question performed well on past exams (if applicable)?
• Are the content and difficulty level appropriate to the course?
• Does it have any of the issues that complicate writing good questions, such as negatively worded stems, jargon, abbreviations, and double jeopardy?

Reviewers don’t make changes to an exam directly—that’s still the responsibility of the person who created it. Instead, they make comments in the “Internal Comments” field provided for each question in ExamSoft.

3) The original exam creator reviews the suggestions and makes improvements. After the reviewers have made their notes, the faculty member who created the exam can review them and strengthen the questions based on that feedback. Once students take the test, the faculty member will have new data to help determine whether the exam questions are working or need further revision. That data can also help improve course materials, clarify how best to use class time, and identify which students need extra help.

One of the surprising results of introducing this peer-review process at St. John Fisher College was the response from students. They genuinely appreciated having clearer questions and higher-quality exams. In addition, since it took them less time to understand what each question was asking, they could focus more on answering it.

You can learn more about this process and ask Dr. Mathews any questions you may have by listening to this webinar. The results St. John Fisher College achieved are worth hearing about.

