Simplifying the Assessment Process for Practical Exams

April 27, 2016 Sarah Zahl

Many institutions are using rubrics to conduct practical assessments in various areas of the curriculum. For the purposes of this post, practical assessments include any evaluations that measure a student’s ability to demonstrate knowledge, skills, or behaviors related to curricular content.

Planning and facilitating practical assessments can be quite complex, since they involve multiple participants and several moving parts. For example, faculty often evaluate students with rubrics while students perform a task to show evidence of their competency or ability. To further complicate this process, many practical assessments also include peer evaluation and self-evaluation, which require the administration of additional rubrics and scoring components.

In this post, I will share how we simplified our assessment process for practical exams. These changes have led to improved outcomes and positive feedback from faculty and students.

Sharing rubrics with students: If you read my previous post about the importance of releasing rubrics to students, you know that I value this practice as a mechanism to support assessment transparency and communication. Additionally, releasing rubrics to students through ExamSoft saves time and eliminates confusion. When rubrics are accessible to students, they see their actual results rather than a simple score or summary of performance. Viewing the full rubric allows students to see the objective nature of an otherwise subjective assessment, eliminating the need for additional explanations or a series of follow-up questions. Releasing the full rubric in ExamSoft also eliminates the extra time required to copy and paste results into another system; students can see their results in real time within the system.

Generalized language: To streamline our rubrics, we revised the language within the dimensions to be general rather than specific. This tactic allows us to reuse the same rubric template for multiple types of assessments, requiring only small edits for elements specific to the next assessment. For example, our previous rubric dimensions for Objective Structured Clinical Exams (OSCEs) listed diagnosis-specific elements (e.g., a full checklist related to hypertension) for each assessment. This specificity throughout the rubric limited our ability to reuse it for subsequent assessments without making substantial edits. By changing the language to a general description (e.g., "the student asked all pertinent patient history questions related to the appropriate clinical diagnosis"), the rubric can be used again without significant edits. Faculty graders and students were already aware of the diagnosis, so it was not necessary to include all of the specifics within the rubric.
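To make the idea concrete, here is a minimal sketch (in Python, purely illustrative; the dimension wording and helper names are my own, not an ExamSoft feature) of how generalized dimension language lets one rubric template serve multiple OSCE cases:

```python
# Illustrative sketch only. General dimension wording avoids case-specific
# checklists, so the same template can be reused across OSCE cases with only
# the case label changing.

GENERAL_OSCE_DIMENSIONS = [
    "Student asked all pertinent patient history questions related to the clinical diagnosis.",
    "Student performed the physical exam maneuvers appropriate to the clinical diagnosis.",
    "Student clearly communicated findings and next steps to the patient.",
]

def build_case_rubric(case_name):
    """Reuse the general dimensions for a specific OSCE case."""
    return {"case": case_name, "dimensions": list(GENERAL_OSCE_DIMENSIONS)}

# The same template serves multiple assessments without rewriting the dimensions.
hypertension_osce = build_case_rubric("Hypertension")
asthma_osce = build_case_rubric("Asthma")
```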

Simplified scoring: We received feedback from faculty indicating that scoring rubric-based assessments was taking significantly longer than expected. When grading takes a large amount of time, the turnaround time for student feedback increases, which often leads to frustration. To simplify grading and decrease the burden on faculty, we modified our rubrics for some assessments from a five-level model (e.g., greatly exceeds/exceeds/meets/does not meet/needs significant improvement) to a three-level model (e.g., exceeds/meets/needs improvement), which provides clarity and simplicity for graders. This change has improved grading turnaround time and students' understanding of exactly where their performance falls within the spectrum. Further, the simplified scale has removed some of the ambiguity created by subtle differences in assessment language.
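As a simple illustration (a hypothetical sketch in Python; the labels and point values are assumptions for demonstration, not ExamSoft's scoring model), a three-level scale gives graders only three choices per dimension and makes totals easy to interpret:

```python
# Illustrative sketch only. A hypothetical three-level rubric scale and a helper
# that totals the points for a set of per-dimension ratings.

THREE_LEVEL_SCALE = {
    "exceeds": 2,
    "meets": 1,
    "needs improvement": 0,
}

def score_rubric(ratings):
    """Total the points for a list of per-dimension ratings."""
    return sum(THREE_LEVEL_SCALE[rating] for rating in ratings)

# Example: three dimensions rated by a faculty grader.
print(score_rubric(["exceeds", "meets", "meets"]))  # -> 4
```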

These three major changes have led to improved outcomes in all four phases of our practical assessments: planning, implementation, data collection, and data utilization. Our efficiency and effectiveness have improved, and we are receiving positive feedback from key stakeholders and collaborators in our assessment process.

 

About the Author

Sarah Zahl

Sarah B. Zahl, Ph.D., is the Director of Educational Assessment at the Marian University College of Osteopathic Medicine. Dr. Zahl earned her Ph.D. and M.S. in Higher Education from Indiana University and a B.S. in Journalism from Butler University. She has nine years of experience in academic and student affairs in higher education. In addition to her administrative roles, she has taught courses in Education, Qualitative and Quantitative Research Methods, and College Teaching and Learning. Dr. Zahl's academic interests include competency-based assessment, curriculum mapping, and tracking student success factors during graduate study.
