Dr. Sherry Jimenez, Assistant Dean for Educational Development at Marian University College of Osteopathic Medicine
Sarah B. Zahl, Ph.D., Director of Educational Assessment at Marian University College of Osteopathic Medicine
To prepare students to become competent practitioners when they graduate, we need to know whether they can apply their knowledge and perform certain tasks in a practical setting. By creating an environment that mimics what students will experience in the field, faculty can evaluate whether students can successfully perform the skills that will be required of them as practitioners. If your curriculum includes practical components that require an evaluation of student performance, you have most likely used rubrics for this purpose.
When Marian University College of Osteopathic Medicine opened in August 2013, we began collecting practical data within the clinical courses of our first-year medical curriculum. Once we began operating at full capacity with two classes of D.O. students, the need to automate this process became paramount. Before automating our collection of practical data, we used another system, albeit unsuccessfully: data collection was extremely manual, consuming countless hours of data entry by multiple parties (really anyone who was willing to help!). With the advent of ExamSoft’s multiple grader rubrics feature, we have greatly improved our ability to automate the evaluation process, as well as our embedded assessment capabilities where direct observation is needed to augment the assessment of student performance.
How Automated Multiple Grader Rubrics Helped
Let’s use our Osteopathic Principles and Practice (OPP) course as an example. Within this course, there are numerous osteopathic manipulative techniques students must perform adequately in each anatomical region and sub-region. To ensure that our students are competent at performing these techniques, our clinical faculty must observe and assess students as they perform them on simulated patients. These evaluations take place in our Osteopathic Manipulative Medicine (OMM) lab, where we rotate students through a number of “stations” (OMM tables), each representing a different body region/sub-region. Students are expected to demonstrate their ability to perform the respective technique under the watchful eyes of their evaluators. Since the assessments are designed to include multiple stations, multiple graders are involved in each evaluation.
Originally, this was a manual process because of the complexity of multiple students rotating through multiple stations covering numerous regions and sub-regions. Students were expected to know all of the regions, but they were assessed on only 2-4 regions per exam. To ensure that students could perform proper techniques on any body region, students were not told which regions a given exam would include. Therefore, multiple rubrics were designed and randomly distributed to multiple graders at the time of the exam.
Using a multiple-grader option available with our online examination technology platform, ExamSoft, we were able to expedite and automate our grading process. After creating a few mock assessments to test the system, our assessment team met with faculty representatives to illustrate the benefits of converting our practical assessments to the online system. After examining and refining the rubrics to align with our technology’s framework, we collaborated with faculty to pilot the tool.
To simplify the assessment process, we did the following in ExamSoft:
- Predetermined (with the course director) which regions and sub-regions would be included in each assessment
- Assigned students to each region and/or sub-region (but did not share this information with students)
- Assigned specific graders to specific students prior to the exam
- Preset the above parameters in ExamSoft
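To make the pre-exam setup concrete, the steps above can be sketched as a short script. All names and data here are hypothetical, and ExamSoft performs this setup through its interface rather than through code; this is only an illustration of the logic.

```python
import random

# Hypothetical pre-exam setup: regions chosen with the course director,
# students assigned to regions, and graders assigned to students in advance.
students = ["Student A", "Student B", "Student C", "Student D"]
graders = ["Grader 1", "Grader 2"]
exam_regions = ["cervical", "thoracic", "lumbar"]  # 2-4 regions per exam

# Randomly assign each student a region (kept hidden from students).
random.seed(42)  # fixed seed so the example is reproducible
student_regions = {s: random.choice(exam_regions) for s in students}

# Split students among graders so each grader sees only his or her own list.
grader_rosters = {g: students[i::len(graders)] for i, g in enumerate(graders)}

for grader, roster in grader_rosters.items():
    print(grader, "->", [(s, student_regions[s]) for s in roster])
```

Because the assignments are fixed before the exam, each grader's roster can be sorted and displayed independently, which is what makes the in-lab rotation fast.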
As a result, we are able to build fewer rubrics and minimize the time it takes to rotate students through the exam. Faculty graders use iPads to complete the rubrics electronically in real time, eliminating the need for back-end data entry into ExamSoft. Since we now assign faculty graders to students in advance, each grader sees only his or her students in a sorted list, which promotes convenience and ease of use. Additionally, we can release grades and written feedback to students much more quickly.
An electronic rubrics platform also allows us to track competencies and/or learning outcomes by linking each rubric dimension to specific performance criteria. For example, we track student performance in the competency domains of Patient Care, Communication, Empathy, etc. After each assessment, we can pull a report that indicates how students performed within each competency/outcome.
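As a rough illustration of that roll-up, the sketch below tags each rubric dimension with a competency domain and aggregates scores into a per-competency report. The dimension names, scoring scale, and scores are all invented; the actual mapping lives inside the rubric builder.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rubric dimensions, each linked to a competency domain.
dimension_to_competency = {
    "palpates landmarks": "Patient Care",
    "explains procedure": "Communication",
    "responds to discomfort": "Empathy",
}

# Made-up per-dimension scores from one assessment (0-5 scale assumed).
scores = [
    {"dimension": "palpates landmarks", "score": 4},
    {"dimension": "explains procedure", "score": 5},
    {"dimension": "responds to discomfort", "score": 3},
    {"dimension": "palpates landmarks", "score": 5},
]

# Roll the scores up by competency to produce the post-assessment report.
by_competency = defaultdict(list)
for row in scores:
    by_competency[dimension_to_competency[row["dimension"]]].append(row["score"])

report = {c: round(mean(v), 2) for c, v in by_competency.items()}
print(report)  # {'Patient Care': 4.5, 'Communication': 5, 'Empathy': 3}
```

Because every dimension carries a competency tag, the same scores can be re-aggregated by outcome, by course, or across the curriculum without re-grading anything.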
Additionally, we review average assessment scores from each faculty grader in real time, so we can immediately determine whether our graders are assessing students in a standardized way. Students often express concern that one faculty member may grade them differently from another. However, the data in the system indicate a fair amount of consistency among our faculty graders. Previewing and comparing grading data in this way serves as a check on inter-rater reliability before we release assessment scores to students.
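In spirit, that pre-release consistency check works like the sketch below. The averages and tolerance are invented, and a simple deviation-from-the-mean test stands in for whatever comparison a reviewer actually performs when previewing the data.

```python
from statistics import mean

# Hypothetical average rubric scores per grader for one practical exam.
grader_averages = {
    "Grader 1": 84.2,
    "Grader 2": 86.1,
    "Grader 3": 83.7,
}

# Flag graders for review if their average strays too far from the
# overall mean before scores are released to students.
overall = mean(grader_averages.values())
tolerance = 5.0  # assumed acceptable deviation, in points
outliers = {g: avg for g, avg in grader_averages.items()
            if abs(avg - overall) > tolerance}

print("overall mean:", round(overall, 2))
print("graders needing review:", outliers or "none")
```

With these numbers, no grader deviates by more than about 1.5 points, so nothing would be flagged; a larger gap would prompt a conversation with the grader before grades go out.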
This design can be replicated for many practical evaluations in osteopathic medical education, as well as in other healthcare education disciplines in which multiple graders observe and evaluate student performance. This grading process allows us to continuously refine our rubrics and capture practical data across our curriculum. Our next step is to implement this feature in more clinical courses and continue to refine our rubrics to incorporate additional practical exam scenarios.